2025-05-14 01:38:44.200075 | Job console starting
2025-05-14 01:38:44.215484 | Updating git repos
2025-05-14 01:38:44.842291 | Cloning repos into workspace
2025-05-14 01:38:45.024238 | Restoring repo states
2025-05-14 01:38:45.046526 | Merging changes
2025-05-14 01:38:45.046603 | Checking out repos
2025-05-14 01:38:45.292446 | Preparing playbooks
2025-05-14 01:38:45.981659 | Running Ansible setup
2025-05-14 01:38:50.282086 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-14 01:38:51.026329 |
2025-05-14 01:38:51.026538 | PLAY [Base pre]
2025-05-14 01:38:51.045088 |
2025-05-14 01:38:51.045354 | TASK [Setup log path fact]
2025-05-14 01:38:51.067295 | orchestrator | ok
2025-05-14 01:38:51.089811 |
2025-05-14 01:38:51.089968 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-14 01:38:51.135009 | orchestrator | ok
2025-05-14 01:38:51.149475 |
2025-05-14 01:38:51.149597 | TASK [emit-job-header : Print job information]
2025-05-14 01:38:51.208822 | # Job Information
2025-05-14 01:38:51.209121 | Ansible Version: 2.16.14
2025-05-14 01:38:51.209187 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-14 01:38:51.209247 | Pipeline: post
2025-05-14 01:38:51.209289 | Executor: 521e9411259a
2025-05-14 01:38:51.209327 | Triggered by: https://github.com/osism/testbed/commit/9019e1994a75db0badaf1f8d27c5d2547d7f8da5
2025-05-14 01:38:51.209444 | Event ID: 3c4e3cac-303d-11f0-99a9-13ac0be31b5e
2025-05-14 01:38:51.219938 |
2025-05-14 01:38:51.220070 | LOOP [emit-job-header : Print node information]
2025-05-14 01:38:51.357639 | orchestrator | ok:
2025-05-14 01:38:51.358083 | orchestrator | # Node Information
2025-05-14 01:38:51.358151 | orchestrator | Inventory Hostname: orchestrator
2025-05-14 01:38:51.358193 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-14 01:38:51.358230 | orchestrator | Username: zuul-testbed05
2025-05-14 01:38:51.358264 | orchestrator | Distro: Debian 12.10
2025-05-14 01:38:51.358305 | orchestrator | Provider: static-testbed
2025-05-14 01:38:51.358339 | orchestrator | Region:
2025-05-14 01:38:51.358394 | orchestrator | Label: testbed-orchestrator
2025-05-14 01:38:51.358430 | orchestrator | Product Name: OpenStack Nova
2025-05-14 01:38:51.358462 | orchestrator | Interface IP: 81.163.193.140
2025-05-14 01:38:51.386293 |
2025-05-14 01:38:51.386497 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-14 01:38:51.867702 | orchestrator -> localhost | changed
2025-05-14 01:38:51.876219 |
2025-05-14 01:38:51.876348 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-14 01:38:52.990172 | orchestrator -> localhost | changed
2025-05-14 01:38:53.013931 |
2025-05-14 01:38:53.014083 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-14 01:38:53.302755 | orchestrator -> localhost | ok
2025-05-14 01:38:53.317882 |
2025-05-14 01:38:53.318049 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-14 01:38:53.353477 | orchestrator | ok
2025-05-14 01:38:53.373214 | orchestrator | included: /var/lib/zuul/builds/be0688f9253a41979af5a828f2206be5/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-14 01:38:53.381604 |
2025-05-14 01:38:53.381711 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-14 01:38:54.214335 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-14 01:38:54.214937 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/be0688f9253a41979af5a828f2206be5/work/be0688f9253a41979af5a828f2206be5_id_rsa
2025-05-14 01:38:54.215045 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/be0688f9253a41979af5a828f2206be5/work/be0688f9253a41979af5a828f2206be5_id_rsa.pub
2025-05-14 01:38:54.215112 | orchestrator -> localhost | The key fingerprint is:
2025-05-14 01:38:54.215172 | orchestrator -> localhost | SHA256:iIVLp530qETCCuhoJcWI098gHtG65usIcN/2gmEpV5c zuul-build-sshkey
2025-05-14 01:38:54.215229 | orchestrator -> localhost | The key's randomart image is:
2025-05-14 01:38:54.215304 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-14 01:38:54.215389 | orchestrator -> localhost | |..++ |
2025-05-14 01:38:54.215453 | orchestrator -> localhost | |+o=.o. |
2025-05-14 01:38:54.215508 | orchestrator -> localhost | |o+o*+o+ . |
2025-05-14 01:38:54.215559 | orchestrator -> localhost | |+.=+.Oo=E |
2025-05-14 01:38:54.215610 | orchestrator -> localhost | |+o..=o=.S |
2025-05-14 01:38:54.215682 | orchestrator -> localhost | |o.+o=o |
2025-05-14 01:38:54.215740 | orchestrator -> localhost | |.o +ooo |
2025-05-14 01:38:54.215796 | orchestrator -> localhost | |... .... |
2025-05-14 01:38:54.215850 | orchestrator -> localhost | |..o. .. |
2025-05-14 01:38:54.215903 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-14 01:38:54.216030 | orchestrator -> localhost | ok: Runtime: 0:00:00.340608
2025-05-14 01:38:54.228690 |
2025-05-14 01:38:54.228821 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-14 01:38:54.261259 | orchestrator | ok
2025-05-14 01:38:54.273289 | orchestrator | included: /var/lib/zuul/builds/be0688f9253a41979af5a828f2206be5/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-14 01:38:54.282494 |
2025-05-14 01:38:54.282596 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-14 01:38:54.306236 | orchestrator | skipping: Conditional result was False
2025-05-14 01:38:54.314688 |
2025-05-14 01:38:54.314791 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-14 01:38:54.909438 | orchestrator | changed
2025-05-14 01:38:54.918339 |
2025-05-14 01:38:54.918488 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-14 01:38:55.200426 | orchestrator | ok
2025-05-14 01:38:55.210826 |
2025-05-14 01:38:55.211057 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-14 01:38:55.668503 | orchestrator | ok
2025-05-14 01:38:55.677566 |
2025-05-14 01:38:55.677693 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-14 01:38:56.151317 | orchestrator | ok
2025-05-14 01:38:56.160115 |
2025-05-14 01:38:56.160252 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-14 01:38:56.195083 | orchestrator | skipping: Conditional result was False
2025-05-14 01:38:56.209401 |
2025-05-14 01:38:56.209560 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-14 01:38:56.672497 | orchestrator -> localhost | changed
2025-05-14 01:38:56.697949 |
2025-05-14 01:38:56.698093 | TASK [add-build-sshkey : Add back temp key]
2025-05-14 01:38:57.057149 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/be0688f9253a41979af5a828f2206be5/work/be0688f9253a41979af5a828f2206be5_id_rsa (zuul-build-sshkey)
2025-05-14 01:38:57.057542 | orchestrator -> localhost | ok: Runtime: 0:00:00.019965
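The add-build-sshkey tasks above boil down to a small amount of shell work: generate a throwaway RSA key named after the build UUID, authorize it on every node, and swap it into the executor's ssh-agent in place of Zuul's master key. A minimal sketch, assuming illustrative WORK_DIR and BUILD_UUID variables and simplified paths (the role's actual logic lives in the create-key-and-replace.yaml and remote-linux.yaml task files referenced above):

    # Create the per-build key; the log shows an RSA 3072 key with the comment "zuul-build-sshkey".
    ssh-keygen -t rsa -b 3072 -N "" -C zuul-build-sshkey -f "$WORK_DIR/${BUILD_UUID}_id_rsa"
    # On each node: authorize the build key and install the key pair under ~/.ssh
    # ("Enable access via build key" / "Install build private|public key as SSH key on all nodes").
    cat "$WORK_DIR/${BUILD_UUID}_id_rsa.pub" >> ~/.ssh/authorized_keys
    # On the executor: clear loaded identities and add the temporary build key
    # ("Remove master key from local agent" / "Add back temp key"); -D is a simplification,
    # the real role removes only the master key.
    ssh-add -D
    ssh-add "$WORK_DIR/${BUILD_UUID}_id_rsa"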
2025-05-14 01:38:57.069503 |
2025-05-14 01:38:57.069647 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-14 01:38:57.493422 | orchestrator | ok
2025-05-14 01:38:57.501605 |
2025-05-14 01:38:57.501737 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-14 01:38:57.536731 | orchestrator | skipping: Conditional result was False
2025-05-14 01:38:57.594678 |
2025-05-14 01:38:57.594813 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-14 01:38:58.007477 | orchestrator | ok
2025-05-14 01:38:58.023308 |
2025-05-14 01:38:58.023469 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-14 01:38:58.067248 | orchestrator | ok
2025-05-14 01:38:58.077004 |
2025-05-14 01:38:58.077124 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-14 01:38:58.396754 | orchestrator -> localhost | ok
2025-05-14 01:38:58.413227 |
2025-05-14 01:38:58.413438 | TASK [validate-host : Collect information about the host]
2025-05-14 01:38:59.665658 | orchestrator | ok
2025-05-14 01:38:59.683857 |
2025-05-14 01:38:59.683996 | TASK [validate-host : Sanitize hostname]
2025-05-14 01:38:59.760592 | orchestrator | ok
2025-05-14 01:38:59.769593 |
2025-05-14 01:38:59.769750 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-14 01:39:00.343978 | orchestrator -> localhost | changed
2025-05-14 01:39:00.359334 |
2025-05-14 01:39:00.359558 | TASK [validate-host : Collect information about zuul worker]
2025-05-14 01:39:00.825130 | orchestrator | ok
2025-05-14 01:39:00.834043 |
2025-05-14 01:39:00.834199 | TASK [validate-host : Write out all zuul information for each host]
2025-05-14 01:39:01.384904 | orchestrator -> localhost | changed
2025-05-14 01:39:01.404421 |
2025-05-14 01:39:01.404602 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-14 01:39:01.697269 | orchestrator | ok
2025-05-14 01:39:01.703782 |
2025-05-14 01:39:01.703895 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-14 01:39:20.727616 | orchestrator | changed:
2025-05-14 01:39:20.727921 | orchestrator | .d..t...... src/
2025-05-14 01:39:20.727978 | orchestrator | .d..t...... src/github.com/
2025-05-14 01:39:20.728019 | orchestrator | .d..t...... src/github.com/osism/
2025-05-14 01:39:20.728055 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-14 01:39:20.728089 | orchestrator | RedHat.yml
2025-05-14 01:39:20.741088 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-14 01:39:20.741107 | orchestrator | RedHat.yml
2025-05-14 01:39:20.741164 | orchestrator | = 1.53.0"...
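The "Synchronize src repos to workspace directory." task copies the git checkouts prepared on the executor into the node's workspace with rsync; the dotted codes in its output are rsync's --itemize-changes format (".d..t......" is a directory whose timestamp is being updated, ".L..t......" a symlink such as CentOS.yml -> RedHat.yml). A rough equivalent, with the destination host and paths as placeholders rather than values from the job:

    # Mirror the prepared repositories onto the node; -i produces the change codes seen above.
    rsync -a -i src/ "$NODE_USER@$NODE_HOST:src/"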
2025-05-14 01:39:33.198819 | orchestrator | 01:39:33.198 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-14 01:39:34.840989 | orchestrator | 01:39:34.840 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-05-14 01:39:35.821193 | orchestrator | 01:39:35.820 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-05-14 01:39:37.057906 | orchestrator | 01:39:37.057 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-14 01:39:37.999277 | orchestrator | 01:39:37.999 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-14 01:39:39.217293 | orchestrator | 01:39:39.217 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-14 01:39:39.999090 | orchestrator | 01:39:39.998 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-14 01:39:39.999235 | orchestrator | 01:39:39.998 STDOUT terraform: Providers are signed by their developers.
2025-05-14 01:39:39.999253 | orchestrator | 01:39:39.998 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-14 01:39:39.999268 | orchestrator | 01:39:39.998 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-14 01:39:39.999292 | orchestrator | 01:39:39.998 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-14 01:39:39.999315 | orchestrator | 01:39:39.999 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-14 01:39:39.999332 | orchestrator | 01:39:39.999 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-14 01:39:39.999343 | orchestrator | 01:39:39.999 STDOUT terraform: you run "tofu init" in the future.
2025-05-14 01:39:39.999358 | orchestrator | 01:39:39.999 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-14 01:39:39.999370 | orchestrator | 01:39:39.999 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-14 01:39:39.999386 | orchestrator | 01:39:39.999 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-14 01:39:39.999398 | orchestrator | 01:39:39.999 STDOUT terraform: should now work.
2025-05-14 01:39:39.999409 | orchestrator | 01:39:39.999 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-14 01:39:39.999424 | orchestrator | 01:39:39.999 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-14 01:39:39.999532 | orchestrator | 01:39:39.999 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-14 01:39:40.178146 | orchestrator | 01:39:40.177 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-14 01:39:40.419232 | orchestrator | 01:39:40.418 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-14 01:39:40.419331 | orchestrator | 01:39:40.419 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-14 01:39:40.419464 | orchestrator | 01:39:40.419 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-14 01:39:40.419515 | orchestrator | 01:39:40.419 STDOUT terraform: for this configuration.
2025-05-14 01:39:40.656026 | orchestrator | 01:39:40.655 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-14 01:39:40.784501 | orchestrator | 01:39:40.784 STDOUT terraform: ci.auto.tfvars
2025-05-14 01:39:40.789398 | orchestrator | 01:39:40.789 STDOUT terraform: default_custom.tf
2025-05-14 01:39:41.049401 | orchestrator | 01:39:41.049 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
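From here the job hands off to Terragrunt, which drives OpenTofu (hence the repeated TERRAGRUNT_TFPATH deprecation warnings): providers are installed and pinned in .terraform.lock.hcl, a dedicated "ci" workspace is created, and the execution plan below is produced. A hedged sketch of the underlying command flow, with the binary path taken from the warning text and the rest assumed rather than read from the job definition:

    export TG_TF_PATH=/home/zuul-testbed05/terraform  # replacement for the deprecated TERRAGRUNT_TFPATH
    tofu init               # installs hashicorp/local, hashicorp/null and terraform-provider-openstack, writes .terraform.lock.hcl
    tofu workspace new ci   # "Created and switched to workspace ci" -- isolates this run's state
    tofu plan               # prints the execution plan that follows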
2025-05-14 01:39:42.106976 | orchestrator | 01:39:42.106 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-05-14 01:39:42.614818 | orchestrator | 01:39:42.614 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-05-14 01:39:42.836516 | orchestrator | 01:39:42.836 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-05-14 01:39:42.836634 | orchestrator | 01:39:42.836 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-05-14 01:39:42.836652 | orchestrator | 01:39:42.836 STDOUT terraform:  + create 2025-05-14 01:39:42.836666 | orchestrator | 01:39:42.836 STDOUT terraform:  <= read (data resources) 2025-05-14 01:39:42.836683 | orchestrator | 01:39:42.836 STDOUT terraform: OpenTofu will perform the following actions: 2025-05-14 01:39:42.836847 | orchestrator | 01:39:42.836 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-05-14 01:39:42.836870 | orchestrator | 01:39:42.836 STDOUT terraform:  # (config refers to values not yet known) 2025-05-14 01:39:42.836944 | orchestrator | 01:39:42.836 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-05-14 01:39:42.837041 | orchestrator | 01:39:42.836 STDOUT terraform:  + checksum = (known after apply) 2025-05-14 01:39:42.837085 | orchestrator | 01:39:42.836 STDOUT terraform:  + created_at = (known after apply) 2025-05-14 01:39:42.837181 | orchestrator | 01:39:42.837 STDOUT terraform:  + file = (known after apply) 2025-05-14 01:39:42.837257 | orchestrator | 01:39:42.837 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.837343 | orchestrator | 01:39:42.837 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.837437 | orchestrator | 01:39:42.837 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-14 01:39:42.837508 | orchestrator | 01:39:42.837 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-14 01:39:42.837525 | orchestrator | 01:39:42.837 STDOUT terraform:  + most_recent = true 2025-05-14 01:39:42.837593 | orchestrator | 01:39:42.837 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:39:42.837687 | orchestrator | 01:39:42.837 STDOUT terraform:  + protected = (known after apply) 2025-05-14 01:39:42.837792 | orchestrator | 01:39:42.837 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.837886 | orchestrator | 01:39:42.837 STDOUT terraform:  + schema = (known after apply) 2025-05-14 01:39:42.837928 | orchestrator | 01:39:42.837 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-14 01:39:42.838049 | orchestrator | 01:39:42.837 STDOUT terraform:  + tags = (known after apply) 2025-05-14 01:39:42.838099 | orchestrator | 01:39:42.837 STDOUT terraform:  + updated_at = (known after apply) 2025-05-14 01:39:42.838116 | orchestrator | 01:39:42.838 STDOUT terraform:  } 2025-05-14 01:39:42.838265 | orchestrator | 01:39:42.838 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-05-14 01:39:42.838339 | orchestrator | 01:39:42.838 STDOUT terraform:  # (config refers to values not yet known) 2025-05-14 01:39:42.838430 | orchestrator | 01:39:42.838 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-05-14 01:39:42.838523 | orchestrator | 01:39:42.838 STDOUT terraform:  + checksum = (known after apply) 2025-05-14 01:39:42.838570 | orchestrator | 01:39:42.838 STDOUT 
terraform:  + created_at = (known after apply) 2025-05-14 01:39:42.838644 | orchestrator | 01:39:42.838 STDOUT terraform:  + file = (known after apply) 2025-05-14 01:39:42.838721 | orchestrator | 01:39:42.838 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.838798 | orchestrator | 01:39:42.838 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.838855 | orchestrator | 01:39:42.838 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-14 01:39:42.838929 | orchestrator | 01:39:42.838 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-14 01:39:42.838947 | orchestrator | 01:39:42.838 STDOUT terraform:  + most_recent = true 2025-05-14 01:39:42.839031 | orchestrator | 01:39:42.838 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:39:42.839131 | orchestrator | 01:39:42.839 STDOUT terraform:  + protected = (known after apply) 2025-05-14 01:39:42.839208 | orchestrator | 01:39:42.839 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.839285 | orchestrator | 01:39:42.839 STDOUT terraform:  + schema = (known after apply) 2025-05-14 01:39:42.839357 | orchestrator | 01:39:42.839 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-14 01:39:42.839440 | orchestrator | 01:39:42.839 STDOUT terraform:  + tags = (known after apply) 2025-05-14 01:39:42.839511 | orchestrator | 01:39:42.839 STDOUT terraform:  + updated_at = (known after apply) 2025-05-14 01:39:42.839527 | orchestrator | 01:39:42.839 STDOUT terraform:  } 2025-05-14 01:39:42.839597 | orchestrator | 01:39:42.839 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-05-14 01:39:42.839671 | orchestrator | 01:39:42.839 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-05-14 01:39:42.839759 | orchestrator | 01:39:42.839 STDOUT terraform:  + content = (known after apply) 2025-05-14 01:39:42.839850 | orchestrator | 01:39:42.839 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 01:39:42.839954 | orchestrator | 01:39:42.839 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 01:39:42.840053 | orchestrator | 01:39:42.839 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 01:39:42.840126 | orchestrator | 01:39:42.840 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 01:39:42.840364 | orchestrator | 01:39:42.840 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 01:39:42.840440 | orchestrator | 01:39:42.840 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 01:39:42.840456 | orchestrator | 01:39:42.840 STDOUT terraform:  + directory_permission = "0777" 2025-05-14 01:39:42.840475 | orchestrator | 01:39:42.840 STDOUT terraform:  + file_permission = "0644" 2025-05-14 01:39:42.840487 | orchestrator | 01:39:42.840 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-05-14 01:39:42.840621 | orchestrator | 01:39:42.840 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.840634 | orchestrator | 01:39:42.840 STDOUT terraform:  } 2025-05-14 01:39:42.840719 | orchestrator | 01:39:42.840 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-05-14 01:39:42.840761 | orchestrator | 01:39:42.840 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-05-14 01:39:42.840849 | orchestrator | 01:39:42.840 STDOUT terraform:  + content = (known after apply) 2025-05-14 01:39:42.840938 | orchestrator | 01:39:42.840 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 01:39:42.841023 
| orchestrator | 01:39:42.840 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 01:39:42.841112 | orchestrator | 01:39:42.841 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 01:39:42.841241 | orchestrator | 01:39:42.841 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 01:39:42.841327 | orchestrator | 01:39:42.841 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 01:39:42.841413 | orchestrator | 01:39:42.841 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 01:39:42.841484 | orchestrator | 01:39:42.841 STDOUT terraform:  + directory_permission = "0777" 2025-05-14 01:39:42.841532 | orchestrator | 01:39:42.841 STDOUT terraform:  + file_permission = "0644" 2025-05-14 01:39:42.841612 | orchestrator | 01:39:42.841 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-05-14 01:39:42.841705 | orchestrator | 01:39:42.841 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.841721 | orchestrator | 01:39:42.841 STDOUT terraform:  } 2025-05-14 01:39:42.841963 | orchestrator | 01:39:42.841 STDOUT terraform:  # local_file.inventory will be created 2025-05-14 01:39:42.842005 | orchestrator | 01:39:42.841 STDOUT terraform:  + resource "local_file" "inventory" { 2025-05-14 01:39:42.842242 | orchestrator | 01:39:42.841 STDOUT terraform:  + content = (known after apply) 2025-05-14 01:39:42.842259 | orchestrator | 01:39:42.842 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 01:39:42.842404 | orchestrator | 01:39:42.842 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 01:39:42.842439 | orchestrator | 01:39:42.842 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 01:39:42.842532 | orchestrator | 01:39:42.842 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 01:39:42.842609 | orchestrator | 01:39:42.842 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 01:39:42.842694 | orchestrator | 01:39:42.842 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 01:39:42.842751 | orchestrator | 01:39:42.842 STDOUT terraform:  + directory_permission = "0777" 2025-05-14 01:39:42.842802 | orchestrator | 01:39:42.842 STDOUT terraform:  + file_permission = "0644" 2025-05-14 01:39:42.842864 | orchestrator | 01:39:42.842 STDOUT terraform:  + filename = "inventory.ci" 2025-05-14 01:39:42.842937 | orchestrator | 01:39:42.842 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.842952 | orchestrator | 01:39:42.842 STDOUT terraform:  } 2025-05-14 01:39:42.843016 | orchestrator | 01:39:42.842 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-05-14 01:39:42.843077 | orchestrator | 01:39:42.843 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-05-14 01:39:42.843139 | orchestrator | 01:39:42.843 STDOUT terraform:  + content = (sensitive value) 2025-05-14 01:39:42.843251 | orchestrator | 01:39:42.843 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 01:39:42.843330 | orchestrator | 01:39:42.843 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 01:39:42.843404 | orchestrator | 01:39:42.843 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 01:39:42.843471 | orchestrator | 01:39:42.843 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 01:39:42.843543 | orchestrator | 01:39:42.843 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 01:39:42.843615 | orchestrator | 
01:39:42.843 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 01:39:42.843664 | orchestrator | 01:39:42.843 STDOUT terraform:  + directory_permission = "0700" 2025-05-14 01:39:42.843714 | orchestrator | 01:39:42.843 STDOUT terraform:  + file_permission = "0600" 2025-05-14 01:39:42.843777 | orchestrator | 01:39:42.843 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-05-14 01:39:42.843851 | orchestrator | 01:39:42.843 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.843866 | orchestrator | 01:39:42.843 STDOUT terraform:  } 2025-05-14 01:39:42.843932 | orchestrator | 01:39:42.843 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-05-14 01:39:42.843991 | orchestrator | 01:39:42.843 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-05-14 01:39:42.844052 | orchestrator | 01:39:42.843 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.844067 | orchestrator | 01:39:42.844 STDOUT terraform:  } 2025-05-14 01:39:42.844188 | orchestrator | 01:39:42.844 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-05-14 01:39:42.844283 | orchestrator | 01:39:42.844 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-05-14 01:39:42.844347 | orchestrator | 01:39:42.844 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.844390 | orchestrator | 01:39:42.844 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.844471 | orchestrator | 01:39:42.844 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.844534 | orchestrator | 01:39:42.844 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.844595 | orchestrator | 01:39:42.844 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.844676 | orchestrator | 01:39:42.844 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-05-14 01:39:42.844767 | orchestrator | 01:39:42.844 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.844783 | orchestrator | 01:39:42.844 STDOUT terraform:  + size = 80 2025-05-14 01:39:42.844831 | orchestrator | 01:39:42.844 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.844843 | orchestrator | 01:39:42.844 STDOUT terraform:  } 2025-05-14 01:39:42.844934 | orchestrator | 01:39:42.844 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-05-14 01:39:42.845031 | orchestrator | 01:39:42.844 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:39:42.845094 | orchestrator | 01:39:42.845 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.845130 | orchestrator | 01:39:42.845 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.845251 | orchestrator | 01:39:42.845 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.845295 | orchestrator | 01:39:42.845 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.845361 | orchestrator | 01:39:42.845 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.845439 | orchestrator | 01:39:42.845 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-05-14 01:39:42.845502 | orchestrator | 01:39:42.845 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.845537 | orchestrator | 01:39:42.845 STDOUT terraform:  + size = 80 2025-05-14 01:39:42.845583 | orchestrator | 01:39:42.845 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 
01:39:42.845597 | orchestrator | 01:39:42.845 STDOUT terraform:  } 2025-05-14 01:39:42.845697 | orchestrator | 01:39:42.845 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-05-14 01:39:42.845781 | orchestrator | 01:39:42.845 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:39:42.845834 | orchestrator | 01:39:42.845 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.845879 | orchestrator | 01:39:42.845 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.845931 | orchestrator | 01:39:42.845 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.845980 | orchestrator | 01:39:42.845 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.846031 | orchestrator | 01:39:42.845 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.846121 | orchestrator | 01:39:42.846 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-05-14 01:39:42.846198 | orchestrator | 01:39:42.846 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.846223 | orchestrator | 01:39:42.846 STDOUT terraform:  + size = 80 2025-05-14 01:39:42.846402 | orchestrator | 01:39:42.846 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.846496 | orchestrator | 01:39:42.846 STDOUT terraform:  } 2025-05-14 01:39:42.846521 | orchestrator | 01:39:42.846 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-05-14 01:39:42.846534 | orchestrator | 01:39:42.846 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:39:42.846544 | orchestrator | 01:39:42.846 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.846555 | orchestrator | 01:39:42.846 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.846569 | orchestrator | 01:39:42.846 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.846605 | orchestrator | 01:39:42.846 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.846660 | orchestrator | 01:39:42.846 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.846726 | orchestrator | 01:39:42.846 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-05-14 01:39:42.846779 | orchestrator | 01:39:42.846 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.846815 | orchestrator | 01:39:42.846 STDOUT terraform:  + size = 80 2025-05-14 01:39:42.846851 | orchestrator | 01:39:42.846 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.846866 | orchestrator | 01:39:42.846 STDOUT terraform:  } 2025-05-14 01:39:42.846949 | orchestrator | 01:39:42.846 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-05-14 01:39:42.847030 | orchestrator | 01:39:42.846 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:39:42.847084 | orchestrator | 01:39:42.847 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.847120 | orchestrator | 01:39:42.847 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.847224 | orchestrator | 01:39:42.847 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.847243 | orchestrator | 01:39:42.847 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.847288 | orchestrator | 01:39:42.847 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.847353 | 
orchestrator | 01:39:42.847 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-05-14 01:39:42.847472 | orchestrator | 01:39:42.847 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.847485 | orchestrator | 01:39:42.847 STDOUT terraform:  + size = 80 2025-05-14 01:39:42.847498 | orchestrator | 01:39:42.847 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.847511 | orchestrator | 01:39:42.847 STDOUT terraform:  } 2025-05-14 01:39:42.847602 | orchestrator | 01:39:42.847 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-05-14 01:39:42.847679 | orchestrator | 01:39:42.847 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:39:42.847715 | orchestrator | 01:39:42.847 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.847762 | orchestrator | 01:39:42.847 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.847817 | orchestrator | 01:39:42.847 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.847874 | orchestrator | 01:39:42.847 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.847928 | orchestrator | 01:39:42.847 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.847996 | orchestrator | 01:39:42.847 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-05-14 01:39:42.848049 | orchestrator | 01:39:42.847 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.848064 | orchestrator | 01:39:42.848 STDOUT terraform:  + size = 80 2025-05-14 01:39:42.848115 | orchestrator | 01:39:42.848 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.848130 | orchestrator | 01:39:42.848 STDOUT terraform:  } 2025-05-14 01:39:42.848248 | orchestrator | 01:39:42.848 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-05-14 01:39:42.848332 | orchestrator | 01:39:42.848 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:39:42.848387 | orchestrator | 01:39:42.848 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.848443 | orchestrator | 01:39:42.848 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.848458 | orchestrator | 01:39:42.848 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.848525 | orchestrator | 01:39:42.848 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.848578 | orchestrator | 01:39:42.848 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.848644 | orchestrator | 01:39:42.848 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-05-14 01:39:42.848698 | orchestrator | 01:39:42.848 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.848713 | orchestrator | 01:39:42.848 STDOUT terraform:  + size = 80 2025-05-14 01:39:42.848764 | orchestrator | 01:39:42.848 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.848779 | orchestrator | 01:39:42.848 STDOUT terraform:  } 2025-05-14 01:39:42.848859 | orchestrator | 01:39:42.848 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-05-14 01:39:42.848934 | orchestrator | 01:39:42.848 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:39:42.848986 | orchestrator | 01:39:42.848 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.849002 | orchestrator | 01:39:42.848 STDOUT terraform:  + 
availability_zone = "nova" 2025-05-14 01:39:42.849070 | orchestrator | 01:39:42.849 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.849123 | orchestrator | 01:39:42.849 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.849232 | orchestrator | 01:39:42.849 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-14 01:39:42.849310 | orchestrator | 01:39:42.849 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.849356 | orchestrator | 01:39:42.849 STDOUT terraform:  + size = 20 2025-05-14 01:39:42.849420 | orchestrator | 01:39:42.849 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.849444 | orchestrator | 01:39:42.849 STDOUT terraform:  } 2025-05-14 01:39:42.849569 | orchestrator | 01:39:42.849 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-14 01:39:42.849655 | orchestrator | 01:39:42.849 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:39:42.849683 | orchestrator | 01:39:42.849 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.849735 | orchestrator | 01:39:42.849 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.849762 | orchestrator | 01:39:42.849 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.849833 | orchestrator | 01:39:42.849 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.849899 | orchestrator | 01:39:42.849 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-14 01:39:42.849946 | orchestrator | 01:39:42.849 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.849991 | orchestrator | 01:39:42.849 STDOUT terraform:  + size = 20 2025-05-14 01:39:42.850043 | orchestrator | 01:39:42.849 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.850062 | orchestrator | 01:39:42.849 STDOUT terraform:  } 2025-05-14 01:39:42.850124 | orchestrator | 01:39:42.850 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-14 01:39:42.850242 | orchestrator | 01:39:42.850 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:39:42.850295 | orchestrator | 01:39:42.850 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.850334 | orchestrator | 01:39:42.850 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.850388 | orchestrator | 01:39:42.850 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.850442 | orchestrator | 01:39:42.850 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.850509 | orchestrator | 01:39:42.850 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-14 01:39:42.850565 | orchestrator | 01:39:42.850 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.850600 | orchestrator | 01:39:42.850 STDOUT terraform:  + size = 20 2025-05-14 01:39:42.850634 | orchestrator | 01:39:42.850 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.850649 | orchestrator | 01:39:42.850 STDOUT terraform:  } 2025-05-14 01:39:42.850725 | orchestrator | 01:39:42.850 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-14 01:39:42.850802 | orchestrator | 01:39:42.850 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:39:42.850851 | orchestrator | 01:39:42.850 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.850885 | orchestrator | 01:39:42.850 
STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.850935 | orchestrator | 01:39:42.850 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.850984 | orchestrator | 01:39:42.850 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.851043 | orchestrator | 01:39:42.850 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-14 01:39:42.851093 | orchestrator | 01:39:42.851 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.851125 | orchestrator | 01:39:42.851 STDOUT terraform:  + size = 20 2025-05-14 01:39:42.851178 | orchestrator | 01:39:42.851 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.851194 | orchestrator | 01:39:42.851 STDOUT terraform:  } 2025-05-14 01:39:42.851285 | orchestrator | 01:39:42.851 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-14 01:39:42.851357 | orchestrator | 01:39:42.851 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:39:42.851406 | orchestrator | 01:39:42.851 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.851440 | orchestrator | 01:39:42.851 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.851490 | orchestrator | 01:39:42.851 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.851541 | orchestrator | 01:39:42.851 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.851600 | orchestrator | 01:39:42.851 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-14 01:39:42.851654 | orchestrator | 01:39:42.851 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.851700 | orchestrator | 01:39:42.851 STDOUT terraform:  + size = 20 2025-05-14 01:39:42.851715 | orchestrator | 01:39:42.851 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.851728 | orchestrator | 01:39:42.851 STDOUT terraform:  } 2025-05-14 01:39:42.851804 | orchestrator | 01:39:42.851 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-14 01:39:42.851873 | orchestrator | 01:39:42.851 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:39:42.851925 | orchestrator | 01:39:42.851 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.851957 | orchestrator | 01:39:42.851 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.852008 | orchestrator | 01:39:42.851 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.852058 | orchestrator | 01:39:42.851 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.852117 | orchestrator | 01:39:42.852 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-14 01:39:42.852196 | orchestrator | 01:39:42.852 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.852361 | orchestrator | 01:39:42.852 STDOUT terraform:  + size = 20 2025-05-14 01:39:42.852419 | orchestrator | 01:39:42.852 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.852433 | orchestrator | 01:39:42.852 STDOUT terraform:  } 2025-05-14 01:39:42.856539 | orchestrator | 01:39:42.852 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-14 01:39:42.857496 | orchestrator | 01:39:42.856 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:39:42.857569 | orchestrator | 01:39:42.857 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.857582 | orchestrator 
| 01:39:42.857 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.857600 | orchestrator | 01:39:42.857 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.857611 | orchestrator | 01:39:42.857 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.857621 | orchestrator | 01:39:42.857 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-14 01:39:42.857634 | orchestrator | 01:39:42.857 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.857644 | orchestrator | 01:39:42.857 STDOUT terraform:  + size = 20 2025-05-14 01:39:42.857658 | orchestrator | 01:39:42.857 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.857671 | orchestrator | 01:39:42.857 STDOUT terraform:  } 2025-05-14 01:39:42.857738 | orchestrator | 01:39:42.857 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-14 01:39:42.857792 | orchestrator | 01:39:42.857 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:39:42.857844 | orchestrator | 01:39:42.857 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.857856 | orchestrator | 01:39:42.857 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.857893 | orchestrator | 01:39:42.857 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.857930 | orchestrator | 01:39:42.857 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.857977 | orchestrator | 01:39:42.857 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-14 01:39:42.858012 | orchestrator | 01:39:42.857 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.858057 | orchestrator | 01:39:42.857 STDOUT terraform:  + size = 20 2025-05-14 01:39:42.858071 | orchestrator | 01:39:42.858 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.858084 | orchestrator | 01:39:42.858 STDOUT terraform:  } 2025-05-14 01:39:42.858155 | orchestrator | 01:39:42.858 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-14 01:39:42.858218 | orchestrator | 01:39:42.858 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:39:42.858262 | orchestrator | 01:39:42.858 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:39:42.858276 | orchestrator | 01:39:42.858 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.858327 | orchestrator | 01:39:42.858 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.858363 | orchestrator | 01:39:42.858 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:39:42.858410 | orchestrator | 01:39:42.858 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-14 01:39:42.858453 | orchestrator | 01:39:42.858 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.858488 | orchestrator | 01:39:42.858 STDOUT terraform:  + size = 20 2025-05-14 01:39:42.858505 | orchestrator | 01:39:42.858 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:39:42.858540 | orchestrator | 01:39:42.858 STDOUT terraform:  } 2025-05-14 01:39:42.858556 | orchestrator | 01:39:42.858 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-14 01:39:42.858612 | orchestrator | 01:39:42.858 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-14 01:39:42.858635 | orchestrator | 01:39:42.858 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:39:42.858684 
| orchestrator | 01:39:42.858 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:39:42.858709 | orchestrator | 01:39:42.858 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:39:42.858760 | orchestrator | 01:39:42.858 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.858783 | orchestrator | 01:39:42.858 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.858805 | orchestrator | 01:39:42.858 STDOUT terraform:  + config_drive = true 2025-05-14 01:39:42.858855 | orchestrator | 01:39:42.858 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:39:42.858903 | orchestrator | 01:39:42.858 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:39:42.858917 | orchestrator | 01:39:42.858 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-14 01:39:42.858951 | orchestrator | 01:39:42.858 STDOUT terraform:  + force_delete = false 2025-05-14 01:39:42.858997 | orchestrator | 01:39:42.858 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.859042 | orchestrator | 01:39:42.858 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.859084 | orchestrator | 01:39:42.859 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:39:42.859098 | orchestrator | 01:39:42.859 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:39:42.859152 | orchestrator | 01:39:42.859 STDOUT terraform:  + name = "testbed-manager" 2025-05-14 01:39:42.859194 | orchestrator | 01:39:42.859 STDOUT terraform:  + power_state = "active" 2025-05-14 01:39:42.859230 | orchestrator | 01:39:42.859 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.859277 | orchestrator | 01:39:42.859 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:39:42.859291 | orchestrator | 01:39:42.859 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:39:42.859439 | orchestrator | 01:39:42.859 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:39:42.859466 | orchestrator | 01:39:42.859 STDOUT terraform:  + user_data = (known after apply) 2025-05-14 01:39:42.859484 | orchestrator | 01:39:42.859 STDOUT terraform:  + block_device { 2025-05-14 01:39:42.859502 | orchestrator | 01:39:42.859 STDOUT terraform:  + boot_index = 0 2025-05-14 01:39:42.859525 | orchestrator | 01:39:42.859 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:39:42.859542 | orchestrator | 01:39:42.859 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:39:42.859559 | orchestrator | 01:39:42.859 STDOUT terraform:  + multiattach = false 2025-05-14 01:39:42.859579 | orchestrator | 01:39:42.859 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:39:42.859618 | orchestrator | 01:39:42.859 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.859634 | orchestrator | 01:39:42.859 STDOUT terraform:  } 2025-05-14 01:39:42.859660 | orchestrator | 01:39:42.859 STDOUT terraform:  + network { 2025-05-14 01:39:42.859681 | orchestrator | 01:39:42.859 STDOUT terraform:  + access_network = false 2025-05-14 01:39:42.859698 | orchestrator | 01:39:42.859 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:39:42.859717 | orchestrator | 01:39:42.859 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:39:42.859738 | orchestrator | 01:39:42.859 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:39:42.859795 | orchestrator | 01:39:42.859 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:39:42.859818 | orchestrator | 
01:39:42.859 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:39:42.859839 | orchestrator | 01:39:42.859 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.859860 | orchestrator | 01:39:42.859 STDOUT terraform:  } 2025-05-14 01:39:42.859881 | orchestrator | 01:39:42.859 STDOUT terraform:  } 2025-05-14 01:39:42.859935 | orchestrator | 01:39:42.859 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-14 01:39:42.859976 | orchestrator | 01:39:42.859 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:39:42.860028 | orchestrator | 01:39:42.859 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:39:42.860053 | orchestrator | 01:39:42.860 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:39:42.860103 | orchestrator | 01:39:42.860 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:39:42.860127 | orchestrator | 01:39:42.860 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.860189 | orchestrator | 01:39:42.860 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.860253 | orchestrator | 01:39:42.860 STDOUT terraform:  + config_drive = true 2025-05-14 01:39:42.860315 | orchestrator | 01:39:42.860 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:39:42.860342 | orchestrator | 01:39:42.860 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:39:42.860363 | orchestrator | 01:39:42.860 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:39:42.860384 | orchestrator | 01:39:42.860 STDOUT terraform:  + force_delete = false 2025-05-14 01:39:42.860438 | orchestrator | 01:39:42.860 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.860462 | orchestrator | 01:39:42.860 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.860526 | orchestrator | 01:39:42.860 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:39:42.860545 | orchestrator | 01:39:42.860 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:39:42.860568 | orchestrator | 01:39:42.860 STDOUT terraform:  + name = "testbed-node-0" 2025-05-14 01:39:42.860589 | orchestrator | 01:39:42.860 STDOUT terraform:  + power_state = "active" 2025-05-14 01:39:42.860625 | orchestrator | 01:39:42.860 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.860692 | orchestrator | 01:39:42.860 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:39:42.860711 | orchestrator | 01:39:42.860 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:39:42.860732 | orchestrator | 01:39:42.860 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:39:42.860800 | orchestrator | 01:39:42.860 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:39:42.860820 | orchestrator | 01:39:42.860 STDOUT terraform:  + block_device { 2025-05-14 01:39:42.860843 | orchestrator | 01:39:42.860 STDOUT terraform:  + boot_index = 0 2025-05-14 01:39:42.860865 | orchestrator | 01:39:42.860 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:39:42.860888 | orchestrator | 01:39:42.860 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:39:42.860911 | orchestrator | 01:39:42.860 STDOUT terraform:  + multiattach = false 2025-05-14 01:39:42.860977 | orchestrator | 01:39:42.860 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:39:42.861002 | orchestrator | 01:39:42.860 STDOUT terraform:  + uuid = 
(known after apply) 2025-05-14 01:39:42.861022 | orchestrator | 01:39:42.860 STDOUT terraform:  } 2025-05-14 01:39:42.861045 | orchestrator | 01:39:42.860 STDOUT terraform:  + network { 2025-05-14 01:39:42.861063 | orchestrator | 01:39:42.861 STDOUT terraform:  + access_network = false 2025-05-14 01:39:42.861083 | orchestrator | 01:39:42.861 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:39:42.861105 | orchestrator | 01:39:42.861 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:39:42.861127 | orchestrator | 01:39:42.861 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:39:42.861224 | orchestrator | 01:39:42.861 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:39:42.861247 | orchestrator | 01:39:42.861 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:39:42.861268 | orchestrator | 01:39:42.861 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.861285 | orchestrator | 01:39:42.861 STDOUT terraform:  } 2025-05-14 01:39:42.861301 | orchestrator | 01:39:42.861 STDOUT terraform:  } 2025-05-14 01:39:42.861321 | orchestrator | 01:39:42.861 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-14 01:39:42.861375 | orchestrator | 01:39:42.861 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:39:42.861401 | orchestrator | 01:39:42.861 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:39:42.861453 | orchestrator | 01:39:42.861 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:39:42.861477 | orchestrator | 01:39:42.861 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:39:42.861527 | orchestrator | 01:39:42.861 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.861553 | orchestrator | 01:39:42.861 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.861585 | orchestrator | 01:39:42.861 STDOUT terraform:  + config_drive = true 2025-05-14 01:39:42.861608 | orchestrator | 01:39:42.861 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:39:42.861629 | orchestrator | 01:39:42.861 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:39:42.861701 | orchestrator | 01:39:42.861 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:39:42.861721 | orchestrator | 01:39:42.861 STDOUT terraform:  + force_delete = false 2025-05-14 01:39:42.861743 | orchestrator | 01:39:42.861 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.861796 | orchestrator | 01:39:42.861 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.861820 | orchestrator | 01:39:42.861 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:39:42.861887 | orchestrator | 01:39:42.861 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:39:42.861907 | orchestrator | 01:39:42.861 STDOUT terraform:  + name = "testbed-node-1" 2025-05-14 01:39:42.861929 | orchestrator | 01:39:42.861 STDOUT terraform:  + power_state = "active" 2025-05-14 01:39:42.861951 | orchestrator | 01:39:42.861 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.862005 | orchestrator | 01:39:42.861 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:39:42.862060 | orchestrator | 01:39:42.861 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:39:42.862083 | orchestrator | 01:39:42.862 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:39:42.862295 | orchestrator | 01:39:42.862 
STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:39:42.862399 | orchestrator | 01:39:42.862 STDOUT terraform:  + block_device { 2025-05-14 01:39:42.862427 | orchestrator | 01:39:42.862 STDOUT terraform:  + boot_index = 0 2025-05-14 01:39:42.862452 | orchestrator | 01:39:42.862 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:39:42.862465 | orchestrator | 01:39:42.862 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:39:42.862476 | orchestrator | 01:39:42.862 STDOUT terraform:  + multiattach = false 2025-05-14 01:39:42.862487 | orchestrator | 01:39:42.862 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:39:42.862502 | orchestrator | 01:39:42.862 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.862514 | orchestrator | 01:39:42.862 STDOUT terraform:  } 2025-05-14 01:39:42.862608 | orchestrator | 01:39:42.862 STDOUT terraform:  + network { 2025-05-14 01:39:42.862621 | orchestrator | 01:39:42.862 STDOUT terraform:  + access_network = false 2025-05-14 01:39:42.862633 | orchestrator | 01:39:42.862 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:39:42.862647 | orchestrator | 01:39:42.862 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:39:42.862658 | orchestrator | 01:39:42.862 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:39:42.862767 | orchestrator | 01:39:42.862 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:39:42.862804 | orchestrator | 01:39:42.862 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:39:42.862820 | orchestrator | 01:39:42.862 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.862832 | orchestrator | 01:39:42.862 STDOUT terraform:  } 2025-05-14 01:39:42.862843 | orchestrator | 01:39:42.862 STDOUT terraform:  } 2025-05-14 01:39:42.862858 | orchestrator | 01:39:42.862 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-14 01:39:42.862904 | orchestrator | 01:39:42.862 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:39:42.862943 | orchestrator | 01:39:42.862 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:39:42.862980 | orchestrator | 01:39:42.862 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:39:42.863018 | orchestrator | 01:39:42.862 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:39:42.863056 | orchestrator | 01:39:42.863 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.863086 | orchestrator | 01:39:42.863 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.863102 | orchestrator | 01:39:42.863 STDOUT terraform:  + config_drive = true 2025-05-14 01:39:42.863191 | orchestrator | 01:39:42.863 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:39:42.863249 | orchestrator | 01:39:42.863 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:39:42.863284 | orchestrator | 01:39:42.863 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:39:42.863322 | orchestrator | 01:39:42.863 STDOUT terraform:  + force_delete = false 2025-05-14 01:39:42.863395 | orchestrator | 01:39:42.863 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.863437 | orchestrator | 01:39:42.863 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.863500 | orchestrator | 01:39:42.863 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:39:42.863544 | 
orchestrator | 01:39:42.863 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:39:42.863599 | orchestrator | 01:39:42.863 STDOUT terraform:  + name = "testbed-node-2" 2025-05-14 01:39:42.863647 | orchestrator | 01:39:42.863 STDOUT terraform:  + power_state = "active" 2025-05-14 01:39:42.863713 | orchestrator | 01:39:42.863 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.863778 | orchestrator | 01:39:42.863 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:39:42.863817 | orchestrator | 01:39:42.863 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:39:42.863881 | orchestrator | 01:39:42.863 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:39:42.863974 | orchestrator | 01:39:42.863 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:39:42.864012 | orchestrator | 01:39:42.863 STDOUT terraform:  + block_device { 2025-05-14 01:39:42.864054 | orchestrator | 01:39:42.864 STDOUT terraform:  + boot_index = 0 2025-05-14 01:39:42.864110 | orchestrator | 01:39:42.864 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:39:42.864194 | orchestrator | 01:39:42.864 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:39:42.864236 | orchestrator | 01:39:42.864 STDOUT terraform:  + multiattach = false 2025-05-14 01:39:42.864291 | orchestrator | 01:39:42.864 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:39:42.864364 | orchestrator | 01:39:42.864 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.864380 | orchestrator | 01:39:42.864 STDOUT terraform:  } 2025-05-14 01:39:42.864422 | orchestrator | 01:39:42.864 STDOUT terraform:  + network { 2025-05-14 01:39:42.864455 | orchestrator | 01:39:42.864 STDOUT terraform:  + access_network = false 2025-05-14 01:39:42.864524 | orchestrator | 01:39:42.864 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:39:42.864577 | orchestrator | 01:39:42.864 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:39:42.864664 | orchestrator | 01:39:42.864 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:39:42.864680 | orchestrator | 01:39:42.864 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:39:42.864741 | orchestrator | 01:39:42.864 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:39:42.864801 | orchestrator | 01:39:42.864 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.864817 | orchestrator | 01:39:42.864 STDOUT terraform:  } 2025-05-14 01:39:42.864851 | orchestrator | 01:39:42.864 STDOUT terraform:  } 2025-05-14 01:39:42.864953 | orchestrator | 01:39:42.864 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-14 01:39:42.865013 | orchestrator | 01:39:42.864 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:39:42.865051 | orchestrator | 01:39:42.864 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:39:42.865089 | orchestrator | 01:39:42.865 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:39:42.865148 | orchestrator | 01:39:42.865 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:39:42.865243 | orchestrator | 01:39:42.865 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.865272 | orchestrator | 01:39:42.865 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.865297 | orchestrator | 01:39:42.865 STDOUT terraform:  + config_drive = true 2025-05-14 
01:39:42.865323 | orchestrator | 01:39:42.865 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:39:42.865380 | orchestrator | 01:39:42.865 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:39:42.865408 | orchestrator | 01:39:42.865 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:39:42.865429 | orchestrator | 01:39:42.865 STDOUT terraform:  + force_delete = false 2025-05-14 01:39:42.865447 | orchestrator | 01:39:42.865 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.865488 | orchestrator | 01:39:42.865 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.865523 | orchestrator | 01:39:42.865 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:39:42.865552 | orchestrator | 01:39:42.865 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:39:42.865566 | orchestrator | 01:39:42.865 STDOUT terraform:  + name = "testbed-node-3" 2025-05-14 01:39:42.865603 | orchestrator | 01:39:42.865 STDOUT terraform:  + power_state = "active" 2025-05-14 01:39:42.865639 | orchestrator | 01:39:42.865 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.865675 | orchestrator | 01:39:42.865 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:39:42.865710 | orchestrator | 01:39:42.865 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:39:42.865757 | orchestrator | 01:39:42.865 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:39:42.865821 | orchestrator | 01:39:42.865 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:39:42.865847 | orchestrator | 01:39:42.865 STDOUT terraform:  + block_device { 2025-05-14 01:39:42.865886 | orchestrator | 01:39:42.865 STDOUT terraform:  + boot_index = 0 2025-05-14 01:39:42.865920 | orchestrator | 01:39:42.865 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:39:42.865980 | orchestrator | 01:39:42.865 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:39:42.866060 | orchestrator | 01:39:42.865 STDOUT terraform:  + multiattach = false 2025-05-14 01:39:42.866080 | orchestrator | 01:39:42.866 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:39:42.866153 | orchestrator | 01:39:42.866 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.866197 | orchestrator | 01:39:42.866 STDOUT terraform:  } 2025-05-14 01:39:42.866210 | orchestrator | 01:39:42.866 STDOUT terraform:  + network { 2025-05-14 01:39:42.866225 | orchestrator | 01:39:42.866 STDOUT terraform:  + access_network = false 2025-05-14 01:39:42.866239 | orchestrator | 01:39:42.866 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:39:42.866254 | orchestrator | 01:39:42.866 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:39:42.866305 | orchestrator | 01:39:42.866 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:39:42.866322 | orchestrator | 01:39:42.866 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:39:42.866360 | orchestrator | 01:39:42.866 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:39:42.866376 | orchestrator | 01:39:42.866 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.866390 | orchestrator | 01:39:42.866 STDOUT terraform:  } 2025-05-14 01:39:42.866406 | orchestrator | 01:39:42.866 STDOUT terraform:  } 2025-05-14 01:39:42.866453 | orchestrator | 01:39:42.866 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-14 01:39:42.866492 | 
orchestrator | 01:39:42.866 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:39:42.866509 | orchestrator | 01:39:42.866 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:39:42.866558 | orchestrator | 01:39:42.866 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:39:42.866597 | orchestrator | 01:39:42.866 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:39:42.866612 | orchestrator | 01:39:42.866 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.866627 | orchestrator | 01:39:42.866 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.866643 | orchestrator | 01:39:42.866 STDOUT terraform:  + config_drive = true 2025-05-14 01:39:42.866694 | orchestrator | 01:39:42.866 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:39:42.866711 | orchestrator | 01:39:42.866 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:39:42.866769 | orchestrator | 01:39:42.866 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:39:42.866795 | orchestrator | 01:39:42.866 STDOUT terraform:  + force_delete = false 2025-05-14 01:39:42.866818 | orchestrator | 01:39:42.866 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.866840 | orchestrator | 01:39:42.866 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.866913 | orchestrator | 01:39:42.866 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:39:42.866936 | orchestrator | 01:39:42.866 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:39:42.866962 | orchestrator | 01:39:42.866 STDOUT terraform:  + name = "testbed-node-4" 2025-05-14 01:39:42.866983 | orchestrator | 01:39:42.866 STDOUT terraform:  + power_state = "active" 2025-05-14 01:39:42.867007 | orchestrator | 01:39:42.866 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.867027 | orchestrator | 01:39:42.866 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:39:42.867051 | orchestrator | 01:39:42.867 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:39:42.867077 | orchestrator | 01:39:42.867 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:39:42.867122 | orchestrator | 01:39:42.867 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:39:42.867149 | orchestrator | 01:39:42.867 STDOUT terraform:  + block_device { 2025-05-14 01:39:42.867214 | orchestrator | 01:39:42.867 STDOUT terraform:  + boot_index = 0 2025-05-14 01:39:42.867236 | orchestrator | 01:39:42.867 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:39:42.867260 | orchestrator | 01:39:42.867 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:39:42.867280 | orchestrator | 01:39:42.867 STDOUT terraform:  + multiattach = false 2025-05-14 01:39:42.867304 | orchestrator | 01:39:42.867 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:39:42.867328 | orchestrator | 01:39:42.867 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.867346 | orchestrator | 01:39:42.867 STDOUT terraform:  } 2025-05-14 01:39:42.867368 | orchestrator | 01:39:42.867 STDOUT terraform:  + network { 2025-05-14 01:39:42.867386 | orchestrator | 01:39:42.867 STDOUT terraform:  + access_network = false 2025-05-14 01:39:42.867408 | orchestrator | 01:39:42.867 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:39:42.867450 | orchestrator | 01:39:42.867 STDOUT terraform:  + fixed_ip_v6 = 
(known after apply) 2025-05-14 01:39:42.867469 | orchestrator | 01:39:42.867 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:39:42.867491 | orchestrator | 01:39:42.867 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:39:42.867514 | orchestrator | 01:39:42.867 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:39:42.867536 | orchestrator | 01:39:42.867 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.867554 | orchestrator | 01:39:42.867 STDOUT terraform:  } 2025-05-14 01:39:42.867576 | orchestrator | 01:39:42.867 STDOUT terraform:  } 2025-05-14 01:39:42.867598 | orchestrator | 01:39:42.867 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-14 01:39:42.867622 | orchestrator | 01:39:42.867 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:39:42.867666 | orchestrator | 01:39:42.867 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:39:42.867691 | orchestrator | 01:39:42.867 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:39:42.867715 | orchestrator | 01:39:42.867 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:39:42.867777 | orchestrator | 01:39:42.867 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.867798 | orchestrator | 01:39:42.867 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:39:42.867832 | orchestrator | 01:39:42.867 STDOUT terraform:  + config_drive = true 2025-05-14 01:39:42.867851 | orchestrator | 01:39:42.867 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:39:42.867874 | orchestrator | 01:39:42.867 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:39:42.867892 | orchestrator | 01:39:42.867 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:39:42.867913 | orchestrator | 01:39:42.867 STDOUT terraform:  + force_delete = false 2025-05-14 01:39:42.867936 | orchestrator | 01:39:42.867 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.867996 | orchestrator | 01:39:42.867 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:39:42.868095 | orchestrator | 01:39:42.867 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:39:42.868116 | orchestrator | 01:39:42.867 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:39:42.868133 | orchestrator | 01:39:42.868 STDOUT terraform:  + name = "testbed-node-5" 2025-05-14 01:39:42.868144 | orchestrator | 01:39:42.868 STDOUT terraform:  + power_state = "active" 2025-05-14 01:39:42.868155 | orchestrator | 01:39:42.868 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.868201 | orchestrator | 01:39:42.868 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:39:42.868217 | orchestrator | 01:39:42.868 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:39:42.868260 | orchestrator | 01:39:42.868 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:39:42.868299 | orchestrator | 01:39:42.868 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:39:42.868327 | orchestrator | 01:39:42.868 STDOUT terraform:  + block_device { 2025-05-14 01:39:42.868339 | orchestrator | 01:39:42.868 STDOUT terraform:  + boot_index = 0 2025-05-14 01:39:42.868353 | orchestrator | 01:39:42.868 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:39:42.868402 | orchestrator | 01:39:42.868 STDOUT terraform:  + destination_type = "volume" 2025-05-14 
01:39:42.868441 | orchestrator | 01:39:42.868 STDOUT terraform:  + multiattach = false 2025-05-14 01:39:42.868480 | orchestrator | 01:39:42.868 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:39:42.868530 | orchestrator | 01:39:42.868 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.868542 | orchestrator | 01:39:42.868 STDOUT terraform:  } 2025-05-14 01:39:42.868554 | orchestrator | 01:39:42.868 STDOUT terraform:  + network { 2025-05-14 01:39:42.868568 | orchestrator | 01:39:42.868 STDOUT terraform:  + access_network = false 2025-05-14 01:39:42.868583 | orchestrator | 01:39:42.868 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:39:42.868631 | orchestrator | 01:39:42.868 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:39:42.868646 | orchestrator | 01:39:42.868 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:39:42.868660 | orchestrator | 01:39:42.868 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:39:42.868709 | orchestrator | 01:39:42.868 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:39:42.868726 | orchestrator | 01:39:42.868 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:39:42.868740 | orchestrator | 01:39:42.868 STDOUT terraform:  } 2025-05-14 01:39:42.868762 | orchestrator | 01:39:42.868 STDOUT terraform:  } 2025-05-14 01:39:42.868777 | orchestrator | 01:39:42.868 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-14 01:39:42.868815 | orchestrator | 01:39:42.868 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-14 01:39:42.868900 | orchestrator | 01:39:42.868 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-14 01:39:42.868913 | orchestrator | 01:39:42.868 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.868924 | orchestrator | 01:39:42.868 STDOUT terraform:  + name = "testbed" 2025-05-14 01:39:42.868935 | orchestrator | 01:39:42.868 STDOUT terraform:  + private_key = (sensitive value) 2025-05-14 01:39:42.868949 | orchestrator | 01:39:42.868 STDOUT terraform:  + public_key = (known after apply) 2025-05-14 01:39:42.868961 | orchestrator | 01:39:42.868 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.868975 | orchestrator | 01:39:42.868 STDOUT terraform:  + user_id = (known after apply) 2025-05-14 01:39:42.868987 | orchestrator | 01:39:42.868 STDOUT terraform:  } 2025-05-14 01:39:42.869038 | orchestrator | 01:39:42.868 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-14 01:39:42.869089 | orchestrator | 01:39:42.869 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:39:42.869115 | orchestrator | 01:39:42.869 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:39:42.869232 | orchestrator | 01:39:42.869 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.869259 | orchestrator | 01:39:42.869 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:39:42.869283 | orchestrator | 01:39:42.869 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.869304 | orchestrator | 01:39:42.869 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:39:42.869323 | orchestrator | 01:39:42.869 STDOUT terraform:  } 2025-05-14 01:39:42.869347 | orchestrator | 01:39:42.869 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-05-14 01:39:42.869372 | 
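For orientation, plan entries like the node_server and keypair blocks above would typically come from a counted, boot-from-volume instance definition along the following lines. This is only a sketch reconstructed from the plan output: the count value, the user_data file name, and the openstack_blockstorage_volume_v3.node_volume / openstack_networking_port_v2.node_port_management references are assumptions, not the actual testbed Terraform sources.

# Sketch only: a counted instance definition consistent with the node_server plan entries.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed" # no public_key given, so the provider generates the key pair
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6 # testbed-node-0 .. testbed-node-5
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml") # shown only as a hash in the plan

  block_device {
    # Boot from a pre-created volume (hypothetical resource name).
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    # Attach the management port created further down in the plan.
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}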
orchestrator | 01:39:42.869 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:39:42.869396 | orchestrator | 01:39:42.869 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:39:42.869413 | orchestrator | 01:39:42.869 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.869469 | orchestrator | 01:39:42.869 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:39:42.869482 | orchestrator | 01:39:42.869 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.869496 | orchestrator | 01:39:42.869 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:39:42.869511 | orchestrator | 01:39:42.869 STDOUT terraform:  } 2025-05-14 01:39:42.869565 | orchestrator | 01:39:42.869 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-05-14 01:39:42.869620 | orchestrator | 01:39:42.869 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:39:42.869636 | orchestrator | 01:39:42.869 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:39:42.869684 | orchestrator | 01:39:42.869 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.869696 | orchestrator | 01:39:42.869 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:39:42.869711 | orchestrator | 01:39:42.869 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.869748 | orchestrator | 01:39:42.869 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:39:42.869759 | orchestrator | 01:39:42.869 STDOUT terraform:  } 2025-05-14 01:39:42.869804 | orchestrator | 01:39:42.869 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-05-14 01:39:42.869853 | orchestrator | 01:39:42.869 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:39:42.869868 | orchestrator | 01:39:42.869 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:39:42.869901 | orchestrator | 01:39:42.869 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.869915 | orchestrator | 01:39:42.869 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:39:42.869959 | orchestrator | 01:39:42.869 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.870046 | orchestrator | 01:39:42.869 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:39:42.870067 | orchestrator | 01:39:42.869 STDOUT terraform:  } 2025-05-14 01:39:42.870089 | orchestrator | 01:39:42.869 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-05-14 01:39:42.870103 | orchestrator | 01:39:42.870 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:39:42.870157 | orchestrator | 01:39:42.870 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:39:42.870196 | orchestrator | 01:39:42.870 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.870230 | orchestrator | 01:39:42.870 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:39:42.870264 | orchestrator | 01:39:42.870 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.870307 | orchestrator | 01:39:42.870 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:39:42.870321 | orchestrator | 01:39:42.870 STDOUT terraform:  } 2025-05-14 01:39:42.870376 | orchestrator | 
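The node_volume_attachment entries ([0] through [8] in this plan) attach additional Cinder volumes to the instances. A matching definition would look roughly like the sketch below; the volume resource name and the index-to-instance mapping are not visible in the plan and are assumptions here.

# Sketch only: counted volume attachments matching the node_volume_attachment plan entries.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9

  # How the nine volumes are distributed over the nodes cannot be read from the
  # plan output; element() is used here purely for illustration.
  instance_id = element(openstack_compute_instance_v2.node_server[*].id, count.index)
  volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id
}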
01:39:42.870 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-05-14 01:39:42.870426 | orchestrator | 01:39:42.870 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:39:42.870460 | orchestrator | 01:39:42.870 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:39:42.870474 | orchestrator | 01:39:42.870 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.870512 | orchestrator | 01:39:42.870 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:39:42.870545 | orchestrator | 01:39:42.870 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.870559 | orchestrator | 01:39:42.870 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:39:42.870572 | orchestrator | 01:39:42.870 STDOUT terraform:  } 2025-05-14 01:39:42.870629 | orchestrator | 01:39:42.870 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-05-14 01:39:42.870677 | orchestrator | 01:39:42.870 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:39:42.870711 | orchestrator | 01:39:42.870 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:39:42.870725 | orchestrator | 01:39:42.870 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.870761 | orchestrator | 01:39:42.870 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:39:42.870795 | orchestrator | 01:39:42.870 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.870809 | orchestrator | 01:39:42.870 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:39:42.870822 | orchestrator | 01:39:42.870 STDOUT terraform:  } 2025-05-14 01:39:42.870875 | orchestrator | 01:39:42.870 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-05-14 01:39:42.870925 | orchestrator | 01:39:42.870 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:39:42.870958 | orchestrator | 01:39:42.870 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:39:42.870972 | orchestrator | 01:39:42.870 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.871009 | orchestrator | 01:39:42.870 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:39:42.871031 | orchestrator | 01:39:42.870 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.871065 | orchestrator | 01:39:42.871 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:39:42.871079 | orchestrator | 01:39:42.871 STDOUT terraform:  } 2025-05-14 01:39:42.871143 | orchestrator | 01:39:42.871 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-05-14 01:39:42.871225 | orchestrator | 01:39:42.871 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:39:42.871239 | orchestrator | 01:39:42.871 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:39:42.871253 | orchestrator | 01:39:42.871 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.871276 | orchestrator | 01:39:42.871 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:39:42.871311 | orchestrator | 01:39:42.871 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.871325 | orchestrator | 01:39:42.871 STDOUT terraform:  + volume_id = 
(known after apply) 2025-05-14 01:39:42.871338 | orchestrator | 01:39:42.871 STDOUT terraform:  } 2025-05-14 01:39:42.871401 | orchestrator | 01:39:42.871 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-05-14 01:39:42.871460 | orchestrator | 01:39:42.871 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-05-14 01:39:42.871487 | orchestrator | 01:39:42.871 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-14 01:39:42.871519 | orchestrator | 01:39:42.871 STDOUT terraform:  + floating_ip = (known after apply) 2025-05-14 01:39:42.871533 | orchestrator | 01:39:42.871 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.871570 | orchestrator | 01:39:42.871 STDOUT terraform:  + port_id = (known after apply) 2025-05-14 01:39:42.871601 | orchestrator | 01:39:42.871 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.871610 | orchestrator | 01:39:42.871 STDOUT terraform:  } 2025-05-14 01:39:42.871655 | orchestrator | 01:39:42.871 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-05-14 01:39:42.871702 | orchestrator | 01:39:42.871 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-05-14 01:39:42.871731 | orchestrator | 01:39:42.871 STDOUT terraform:  + address = (known after apply) 2025-05-14 01:39:42.871759 | orchestrator | 01:39:42.871 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.871771 | orchestrator | 01:39:42.871 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-14 01:39:42.871802 | orchestrator | 01:39:42.871 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:39:42.871831 | orchestrator | 01:39:42.871 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-14 01:39:42.871843 | orchestrator | 01:39:42.871 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.871872 | orchestrator | 01:39:42.871 STDOUT terraform:  + pool = "public" 2025-05-14 01:39:42.871901 | orchestrator | 01:39:42.871 STDOUT terraform:  + port_id = (known after apply) 2025-05-14 01:39:42.871920 | orchestrator | 01:39:42.871 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.871939 | orchestrator | 01:39:42.871 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:39:42.871966 | orchestrator | 01:39:42.871 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.871978 | orchestrator | 01:39:42.871 STDOUT terraform:  } 2025-05-14 01:39:42.872021 | orchestrator | 01:39:42.871 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-05-14 01:39:42.872098 | orchestrator | 01:39:42.872 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-05-14 01:39:42.872144 | orchestrator | 01:39:42.872 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:39:42.872223 | orchestrator | 01:39:42.872 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.872237 | orchestrator | 01:39:42.872 STDOUT terraform:  + availability_zone_hints = [ 2025-05-14 01:39:42.872247 | orchestrator | 01:39:42.872 STDOUT terraform:  + "nova", 2025-05-14 01:39:42.872258 | orchestrator | 01:39:42.872 STDOUT terraform:  ] 2025-05-14 01:39:42.872304 | orchestrator | 01:39:42.872 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-14 01:39:42.872343 | orchestrator | 01:39:42.872 
STDOUT terraform:  + external = (known after apply) 2025-05-14 01:39:42.872381 | orchestrator | 01:39:42.872 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.872419 | orchestrator | 01:39:42.872 STDOUT terraform:  + mtu = (known after apply) 2025-05-14 01:39:42.872458 | orchestrator | 01:39:42.872 STDOUT terraform:  + name = "net-testbed-management" 2025-05-14 01:39:42.872494 | orchestrator | 01:39:42.872 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:39:42.872532 | orchestrator | 01:39:42.872 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:39:42.872569 | orchestrator | 01:39:42.872 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.872606 | orchestrator | 01:39:42.872 STDOUT terraform:  + shared = (known after apply) 2025-05-14 01:39:42.872696 | orchestrator | 01:39:42.872 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.872713 | orchestrator | 01:39:42.872 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-05-14 01:39:42.872722 | orchestrator | 01:39:42.872 STDOUT terraform:  + segments (known after apply) 2025-05-14 01:39:42.872733 | orchestrator | 01:39:42.872 STDOUT terraform:  } 2025-05-14 01:39:42.872845 | orchestrator | 01:39:42.872 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-05-14 01:39:42.872859 | orchestrator | 01:39:42.872 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-05-14 01:39:42.872901 | orchestrator | 01:39:42.872 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:39:42.872931 | orchestrator | 01:39:42.872 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:39:42.872969 | orchestrator | 01:39:42.872 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:39:42.873006 | orchestrator | 01:39:42.872 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.873027 | orchestrator | 01:39:42.872 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:39:42.873075 | orchestrator | 01:39:42.873 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:39:42.873105 | orchestrator | 01:39:42.873 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:39:42.873144 | orchestrator | 01:39:42.873 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:39:42.873199 | orchestrator | 01:39:42.873 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.873213 | orchestrator | 01:39:42.873 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:39:42.873259 | orchestrator | 01:39:42.873 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:39:42.873289 | orchestrator | 01:39:42.873 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:39:42.873326 | orchestrator | 01:39:42.873 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:39:42.873365 | orchestrator | 01:39:42.873 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.873403 | orchestrator | 01:39:42.873 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:39:42.873433 | orchestrator | 01:39:42.873 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.873444 | orchestrator | 01:39:42.873 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.873473 | orchestrator | 01:39:42.873 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 
01:39:42.873485 | orchestrator | 01:39:42.873 STDOUT terraform:  } 2025-05-14 01:39:42.873496 | orchestrator | 01:39:42.873 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.873533 | orchestrator | 01:39:42.873 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:39:42.873543 | orchestrator | 01:39:42.873 STDOUT terraform:  } 2025-05-14 01:39:42.873558 | orchestrator | 01:39:42.873 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:39:42.873569 | orchestrator | 01:39:42.873 STDOUT terraform:  + fixed_ip { 2025-05-14 01:39:42.873606 | orchestrator | 01:39:42.873 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-05-14 01:39:42.873619 | orchestrator | 01:39:42.873 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:39:42.873635 | orchestrator | 01:39:42.873 STDOUT terraform:  } 2025-05-14 01:39:42.873650 | orchestrator | 01:39:42.873 STDOUT terraform:  } 2025-05-14 01:39:42.873688 | orchestrator | 01:39:42.873 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-05-14 01:39:42.873733 | orchestrator | 01:39:42.873 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:39:42.873763 | orchestrator | 01:39:42.873 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:39:42.873806 | orchestrator | 01:39:42.873 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:39:42.873827 | orchestrator | 01:39:42.873 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:39:42.873872 | orchestrator | 01:39:42.873 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.873909 | orchestrator | 01:39:42.873 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:39:42.873966 | orchestrator | 01:39:42.873 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:39:42.874043 | orchestrator | 01:39:42.873 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:39:42.874066 | orchestrator | 01:39:42.874 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:39:42.874112 | orchestrator | 01:39:42.874 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.874144 | orchestrator | 01:39:42.874 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:39:42.874225 | orchestrator | 01:39:42.874 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:39:42.874255 | orchestrator | 01:39:42.874 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:39:42.874268 | orchestrator | 01:39:42.874 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:39:42.874335 | orchestrator | 01:39:42.874 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.874374 | orchestrator | 01:39:42.874 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:39:42.874411 | orchestrator | 01:39:42.874 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.874423 | orchestrator | 01:39:42.874 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.874476 | orchestrator | 01:39:42.874 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:39:42.874489 | orchestrator | 01:39:42.874 STDOUT terraform:  } 2025-05-14 01:39:42.874518 | orchestrator | 01:39:42.874 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.874537 | orchestrator | 01:39:42.874 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:39:42.874566 | 
orchestrator | 01:39:42.874 STDOUT terraform:  } 2025-05-14 01:39:42.874578 | orchestrator | 01:39:42.874 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.874606 | orchestrator | 01:39:42.874 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:39:42.874618 | orchestrator | 01:39:42.874 STDOUT terraform:  } 2025-05-14 01:39:42.874628 | orchestrator | 01:39:42.874 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.874665 | orchestrator | 01:39:42.874 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:39:42.874674 | orchestrator | 01:39:42.874 STDOUT terraform:  } 2025-05-14 01:39:42.874689 | orchestrator | 01:39:42.874 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:39:42.874699 | orchestrator | 01:39:42.874 STDOUT terraform:  + fixed_ip { 2025-05-14 01:39:42.874734 | orchestrator | 01:39:42.874 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-05-14 01:39:42.874760 | orchestrator | 01:39:42.874 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:39:42.874770 | orchestrator | 01:39:42.874 STDOUT terraform:  } 2025-05-14 01:39:42.874779 | orchestrator | 01:39:42.874 STDOUT terraform:  } 2025-05-14 01:39:42.874829 | orchestrator | 01:39:42.874 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-05-14 01:39:42.874873 | orchestrator | 01:39:42.874 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:39:42.874909 | orchestrator | 01:39:42.874 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:39:42.874945 | orchestrator | 01:39:42.874 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:39:42.874980 | orchestrator | 01:39:42.874 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:39:42.875018 | orchestrator | 01:39:42.874 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.875054 | orchestrator | 01:39:42.875 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:39:42.875089 | orchestrator | 01:39:42.875 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:39:42.875138 | orchestrator | 01:39:42.875 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:39:42.875217 | orchestrator | 01:39:42.875 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:39:42.875265 | orchestrator | 01:39:42.875 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.875318 | orchestrator | 01:39:42.875 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:39:42.875379 | orchestrator | 01:39:42.875 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:39:42.875424 | orchestrator | 01:39:42.875 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:39:42.875461 | orchestrator | 01:39:42.875 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:39:42.875498 | orchestrator | 01:39:42.875 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.875535 | orchestrator | 01:39:42.875 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:39:42.875572 | orchestrator | 01:39:42.875 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.875582 | orchestrator | 01:39:42.875 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.875620 | orchestrator | 01:39:42.875 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:39:42.875631 | orchestrator | 
01:39:42.875 STDOUT terraform:  } 2025-05-14 01:39:42.875640 | orchestrator | 01:39:42.875 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.875677 | orchestrator | 01:39:42.875 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:39:42.875687 | orchestrator | 01:39:42.875 STDOUT terraform:  } 2025-05-14 01:39:42.875696 | orchestrator | 01:39:42.875 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.875740 | orchestrator | 01:39:42.875 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:39:42.875748 | orchestrator | 01:39:42.875 STDOUT terraform:  } 2025-05-14 01:39:42.875758 | orchestrator | 01:39:42.875 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.875783 | orchestrator | 01:39:42.875 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:39:42.875793 | orchestrator | 01:39:42.875 STDOUT terraform:  } 2025-05-14 01:39:42.875817 | orchestrator | 01:39:42.875 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:39:42.875832 | orchestrator | 01:39:42.875 STDOUT terraform:  + fixed_ip { 2025-05-14 01:39:42.875841 | orchestrator | 01:39:42.875 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-14 01:39:42.875877 | orchestrator | 01:39:42.875 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:39:42.875886 | orchestrator | 01:39:42.875 STDOUT terraform:  } 2025-05-14 01:39:42.875895 | orchestrator | 01:39:42.875 STDOUT terraform:  } 2025-05-14 01:39:42.875963 | orchestrator | 01:39:42.875 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-14 01:39:42.876011 | orchestrator | 01:39:42.875 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:39:42.876048 | orchestrator | 01:39:42.876 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:39:42.876085 | orchestrator | 01:39:42.876 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:39:42.876136 | orchestrator | 01:39:42.876 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:39:42.876235 | orchestrator | 01:39:42.876 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.876273 | orchestrator | 01:39:42.876 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:39:42.876308 | orchestrator | 01:39:42.876 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:39:42.876345 | orchestrator | 01:39:42.876 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:39:42.876383 | orchestrator | 01:39:42.876 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:39:42.876432 | orchestrator | 01:39:42.876 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.876491 | orchestrator | 01:39:42.876 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:39:42.876556 | orchestrator | 01:39:42.876 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:39:42.876598 | orchestrator | 01:39:42.876 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:39:42.876637 | orchestrator | 01:39:42.876 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:39:42.876673 | orchestrator | 01:39:42.876 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.876766 | orchestrator | 01:39:42.876 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:39:42.876790 | orchestrator | 01:39:42.876 STDOUT terraform:  + tenant_id = (known 
after apply) 2025-05-14 01:39:42.876798 | orchestrator | 01:39:42.876 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.876807 | orchestrator | 01:39:42.876 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:39:42.876814 | orchestrator | 01:39:42.876 STDOUT terraform:  } 2025-05-14 01:39:42.876821 | orchestrator | 01:39:42.876 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.876830 | orchestrator | 01:39:42.876 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:39:42.876839 | orchestrator | 01:39:42.876 STDOUT terraform:  } 2025-05-14 01:39:42.876856 | orchestrator | 01:39:42.876 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.876894 | orchestrator | 01:39:42.876 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:39:42.876902 | orchestrator | 01:39:42.876 STDOUT terraform:  } 2025-05-14 01:39:42.876911 | orchestrator | 01:39:42.876 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.876947 | orchestrator | 01:39:42.876 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:39:42.876956 | orchestrator | 01:39:42.876 STDOUT terraform:  } 2025-05-14 01:39:42.876989 | orchestrator | 01:39:42.876 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:39:42.876997 | orchestrator | 01:39:42.876 STDOUT terraform:  + fixed_ip { 2025-05-14 01:39:42.877006 | orchestrator | 01:39:42.876 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-14 01:39:42.877043 | orchestrator | 01:39:42.877 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:39:42.877051 | orchestrator | 01:39:42.877 STDOUT terraform:  } 2025-05-14 01:39:42.877060 | orchestrator | 01:39:42.877 STDOUT terraform:  } 2025-05-14 01:39:42.877116 | orchestrator | 01:39:42.877 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-14 01:39:42.877205 | orchestrator | 01:39:42.877 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:39:42.877259 | orchestrator | 01:39:42.877 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:39:42.877286 | orchestrator | 01:39:42.877 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:39:42.877324 | orchestrator | 01:39:42.877 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:39:42.877361 | orchestrator | 01:39:42.877 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.877399 | orchestrator | 01:39:42.877 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:39:42.877436 | orchestrator | 01:39:42.877 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:39:42.877473 | orchestrator | 01:39:42.877 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:39:42.877510 | orchestrator | 01:39:42.877 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:39:42.877547 | orchestrator | 01:39:42.877 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.877583 | orchestrator | 01:39:42.877 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:39:42.877619 | orchestrator | 01:39:42.877 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:39:42.877654 | orchestrator | 01:39:42.877 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:39:42.877690 | orchestrator | 01:39:42.877 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:39:42.877727 | orchestrator | 
01:39:42.877 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.877763 | orchestrator | 01:39:42.877 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:39:42.877806 | orchestrator | 01:39:42.877 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.877823 | orchestrator | 01:39:42.877 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.877848 | orchestrator | 01:39:42.877 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:39:42.877856 | orchestrator | 01:39:42.877 STDOUT terraform:  } 2025-05-14 01:39:42.877865 | orchestrator | 01:39:42.877 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.877903 | orchestrator | 01:39:42.877 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:39:42.877914 | orchestrator | 01:39:42.877 STDOUT terraform:  } 2025-05-14 01:39:42.877923 | orchestrator | 01:39:42.877 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.877958 | orchestrator | 01:39:42.877 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:39:42.877969 | orchestrator | 01:39:42.877 STDOUT terraform:  } 2025-05-14 01:39:42.877979 | orchestrator | 01:39:42.877 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.878040 | orchestrator | 01:39:42.877 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:39:42.878053 | orchestrator | 01:39:42.878 STDOUT terraform:  } 2025-05-14 01:39:42.878061 | orchestrator | 01:39:42.878 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:39:42.878070 | orchestrator | 01:39:42.878 STDOUT terraform:  + fixed_ip { 2025-05-14 01:39:42.878095 | orchestrator | 01:39:42.878 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-14 01:39:42.878144 | orchestrator | 01:39:42.878 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:39:42.878155 | orchestrator | 01:39:42.878 STDOUT terraform:  } 2025-05-14 01:39:42.878182 | orchestrator | 01:39:42.878 STDOUT terraform:  } 2025-05-14 01:39:42.878233 | orchestrator | 01:39:42.878 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-14 01:39:42.878280 | orchestrator | 01:39:42.878 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:39:42.878316 | orchestrator | 01:39:42.878 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:39:42.878353 | orchestrator | 01:39:42.878 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:39:42.878389 | orchestrator | 01:39:42.878 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:39:42.878426 | orchestrator | 01:39:42.878 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.878478 | orchestrator | 01:39:42.878 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:39:42.878530 | orchestrator | 01:39:42.878 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:39:42.878580 | orchestrator | 01:39:42.878 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:39:42.878635 | orchestrator | 01:39:42.878 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:39:42.878690 | orchestrator | 01:39:42.878 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.878743 | orchestrator | 01:39:42.878 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:39:42.878797 | orchestrator | 01:39:42.878 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 
01:39:42.878855 | orchestrator | 01:39:42.878 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:39:42.878911 | orchestrator | 01:39:42.878 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:39:42.878951 | orchestrator | 01:39:42.878 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.878988 | orchestrator | 01:39:42.878 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:39:42.879027 | orchestrator | 01:39:42.878 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.879037 | orchestrator | 01:39:42.879 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.879076 | orchestrator | 01:39:42.879 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:39:42.879086 | orchestrator | 01:39:42.879 STDOUT terraform:  } 2025-05-14 01:39:42.879110 | orchestrator | 01:39:42.879 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.879157 | orchestrator | 01:39:42.879 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:39:42.879219 | orchestrator | 01:39:42.879 STDOUT terraform:  } 2025-05-14 01:39:42.879234 | orchestrator | 01:39:42.879 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.879276 | orchestrator | 01:39:42.879 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:39:42.879287 | orchestrator | 01:39:42.879 STDOUT terraform:  } 2025-05-14 01:39:42.879296 | orchestrator | 01:39:42.879 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.879329 | orchestrator | 01:39:42.879 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:39:42.879340 | orchestrator | 01:39:42.879 STDOUT terraform:  } 2025-05-14 01:39:42.879364 | orchestrator | 01:39:42.879 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:39:42.879374 | orchestrator | 01:39:42.879 STDOUT terraform:  + fixed_ip { 2025-05-14 01:39:42.879399 | orchestrator | 01:39:42.879 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-14 01:39:42.879424 | orchestrator | 01:39:42.879 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:39:42.879432 | orchestrator | 01:39:42.879 STDOUT terraform:  } 2025-05-14 01:39:42.879441 | orchestrator | 01:39:42.879 STDOUT terraform:  } 2025-05-14 01:39:42.879490 | orchestrator | 01:39:42.879 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-14 01:39:42.879535 | orchestrator | 01:39:42.879 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:39:42.879571 | orchestrator | 01:39:42.879 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:39:42.879609 | orchestrator | 01:39:42.879 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:39:42.879643 | orchestrator | 01:39:42.879 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:39:42.879681 | orchestrator | 01:39:42.879 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.879717 | orchestrator | 01:39:42.879 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:39:42.879754 | orchestrator | 01:39:42.879 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:39:42.879788 | orchestrator | 01:39:42.879 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:39:42.879823 | orchestrator | 01:39:42.879 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:39:42.879867 | orchestrator | 01:39:42.879 STDOUT 
terraform:  + id = (known after apply) 2025-05-14 01:39:42.879903 | orchestrator | 01:39:42.879 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:39:42.879939 | orchestrator | 01:39:42.879 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:39:42.879974 | orchestrator | 01:39:42.879 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:39:42.880010 | orchestrator | 01:39:42.879 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:39:42.880047 | orchestrator | 01:39:42.880 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.880082 | orchestrator | 01:39:42.880 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:39:42.880131 | orchestrator | 01:39:42.880 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.880155 | orchestrator | 01:39:42.880 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.880201 | orchestrator | 01:39:42.880 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:39:42.880211 | orchestrator | 01:39:42.880 STDOUT terraform:  } 2025-05-14 01:39:42.880225 | orchestrator | 01:39:42.880 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.880262 | orchestrator | 01:39:42.880 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:39:42.880272 | orchestrator | 01:39:42.880 STDOUT terraform:  } 2025-05-14 01:39:42.880281 | orchestrator | 01:39:42.880 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.880319 | orchestrator | 01:39:42.880 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:39:42.880329 | orchestrator | 01:39:42.880 STDOUT terraform:  } 2025-05-14 01:39:42.880337 | orchestrator | 01:39:42.880 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:39:42.880375 | orchestrator | 01:39:42.880 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:39:42.880385 | orchestrator | 01:39:42.880 STDOUT terraform:  } 2025-05-14 01:39:42.880408 | orchestrator | 01:39:42.880 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:39:42.880418 | orchestrator | 01:39:42.880 STDOUT terraform:  + fixed_ip { 2025-05-14 01:39:42.880441 | orchestrator | 01:39:42.880 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-14 01:39:42.880465 | orchestrator | 01:39:42.880 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:39:42.880474 | orchestrator | 01:39:42.880 STDOUT terraform:  } 2025-05-14 01:39:42.880483 | orchestrator | 01:39:42.880 STDOUT terraform:  } 2025-05-14 01:39:42.880537 | orchestrator | 01:39:42.880 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-14 01:39:42.880585 | orchestrator | 01:39:42.880 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-14 01:39:42.880596 | orchestrator | 01:39:42.880 STDOUT terraform:  + force_destroy = false 2025-05-14 01:39:42.880633 | orchestrator | 01:39:42.880 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.880657 | orchestrator | 01:39:42.880 STDOUT terraform:  + port_id = (known after apply) 2025-05-14 01:39:42.880688 | orchestrator | 01:39:42.880 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.880712 | orchestrator | 01:39:42.880 STDOUT terraform:  + router_id = (known after apply) 2025-05-14 01:39:42.880742 | orchestrator | 01:39:42.880 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:39:42.880750 | orchestrator | 
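The six node_port_management entries differ only in their fixed IP (192.168.16.10 through 192.168.16.15) and share the same allowed address pairs, which points to a counted port definition roughly like the sketch below; the subnet resource name and the way the address literals are parameterised are assumptions.

# Sketch only: counted management ports matching the node_port_management plan entries.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # hypothetical name
    ip_address = "192.168.16.${count.index + 10}" # .10 .. .15 as in the plan
  }

  # Additional prefixes/VIPs the nodes may answer for on this port.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}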
01:39:42.880 STDOUT terraform:  } 2025-05-14 01:39:42.880787 | orchestrator | 01:39:42.880 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-14 01:39:42.880824 | orchestrator | 01:39:42.880 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-14 01:39:42.880864 | orchestrator | 01:39:42.880 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:39:42.880897 | orchestrator | 01:39:42.880 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.880928 | orchestrator | 01:39:42.880 STDOUT terraform:  + availability_zone_hints = [ 2025-05-14 01:39:42.880936 | orchestrator | 01:39:42.880 STDOUT terraform:  + "nova", 2025-05-14 01:39:42.880944 | orchestrator | 01:39:42.880 STDOUT terraform:  ] 2025-05-14 01:39:42.880981 | orchestrator | 01:39:42.880 STDOUT terraform:  + distributed = (known after apply) 2025-05-14 01:39:42.881016 | orchestrator | 01:39:42.880 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-14 01:39:42.881066 | orchestrator | 01:39:42.881 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-14 01:39:42.881105 | orchestrator | 01:39:42.881 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.881155 | orchestrator | 01:39:42.881 STDOUT terraform:  + name = "testbed" 2025-05-14 01:39:42.881223 | orchestrator | 01:39:42.881 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.881259 | orchestrator | 01:39:42.881 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.881289 | orchestrator | 01:39:42.881 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-14 01:39:42.881297 | orchestrator | 01:39:42.881 STDOUT terraform:  } 2025-05-14 01:39:42.881353 | orchestrator | 01:39:42.881 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-14 01:39:42.881406 | orchestrator | 01:39:42.881 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-14 01:39:42.881415 | orchestrator | 01:39:42.881 STDOUT terraform:  + description = "ssh" 2025-05-14 01:39:42.881448 | orchestrator | 01:39:42.881 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:39:42.881457 | orchestrator | 01:39:42.881 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:39:42.881496 | orchestrator | 01:39:42.881 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.881505 | orchestrator | 01:39:42.881 STDOUT terraform:  + port_range_max = 22 2025-05-14 01:39:42.881535 | orchestrator | 01:39:42.881 STDOUT terraform:  + port_range_min = 22 2025-05-14 01:39:42.881549 | orchestrator | 01:39:42.881 STDOUT terraform:  + protocol = "tcp" 2025-05-14 01:39:42.881582 | orchestrator | 01:39:42.881 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.881613 | orchestrator | 01:39:42.881 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:39:42.881635 | orchestrator | 01:39:42.881 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:39:42.881666 | orchestrator | 01:39:42.881 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:39:42.881697 | orchestrator | 01:39:42.881 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.881708 | orchestrator | 01:39:42.881 STDOUT terraform:  } 2025-05-14 01:39:42.881761 | orchestrator | 01:39:42.881 STDOUT terraform:  # 
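Security group rule entries such as security_group_management_rule1 above (ssh) and the wireguard rule that follows are produced by definitions of the shape sketched below; the referenced security group resource name is an assumption.

# Sketch only: an ingress rule matching the security_group_management_rule1 plan entry.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id # hypothetical name
}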
openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-14 01:39:42.881815 | orchestrator | 01:39:42.881 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-14 01:39:42.881837 | orchestrator | 01:39:42.881 STDOUT terraform:  + description = "wireguard" 2025-05-14 01:39:42.881858 | orchestrator | 01:39:42.881 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:39:42.881879 | orchestrator | 01:39:42.881 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:39:42.881909 | orchestrator | 01:39:42.881 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.881917 | orchestrator | 01:39:42.881 STDOUT terraform:  + port_range_max = 51820 2025-05-14 01:39:42.881945 | orchestrator | 01:39:42.881 STDOUT terraform:  + port_range_min = 51820 2025-05-14 01:39:42.881954 | orchestrator | 01:39:42.881 STDOUT terraform:  + protocol = "udp" 2025-05-14 01:39:42.881995 | orchestrator | 01:39:42.881 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.882024 | orchestrator | 01:39:42.881 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:39:42.882074 | orchestrator | 01:39:42.882 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:39:42.882110 | orchestrator | 01:39:42.882 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:39:42.882173 | orchestrator | 01:39:42.882 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.882184 | orchestrator | 01:39:42.882 STDOUT terraform:  } 2025-05-14 01:39:42.882241 | orchestrator | 01:39:42.882 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-14 01:39:42.882295 | orchestrator | 01:39:42.882 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-14 01:39:42.882320 | orchestrator | 01:39:42.882 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:39:42.882341 | orchestrator | 01:39:42.882 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:39:42.882372 | orchestrator | 01:39:42.882 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.882393 | orchestrator | 01:39:42.882 STDOUT terraform:  + protocol = "tcp" 2025-05-14 01:39:42.882425 | orchestrator | 01:39:42.882 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.882456 | orchestrator | 01:39:42.882 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:39:42.882486 | orchestrator | 01:39:42.882 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-14 01:39:42.882516 | orchestrator | 01:39:42.882 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:39:42.882547 | orchestrator | 01:39:42.882 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.882555 | orchestrator | 01:39:42.882 STDOUT terraform:  } 2025-05-14 01:39:42.882609 | orchestrator | 01:39:42.882 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-14 01:39:42.882662 | orchestrator | 01:39:42.882 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-14 01:39:42.882687 | orchestrator | 01:39:42.882 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:39:42.882708 | orchestrator | 01:39:42.882 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:39:42.882762 | orchestrator | 01:39:42.882 STDOUT terraform:  
+ id = (known after apply) 2025-05-14 01:39:42.882774 | orchestrator | 01:39:42.882 STDOUT terraform:  + protocol = "udp" 2025-05-14 01:39:42.882781 | orchestrator | 01:39:42.882 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.882816 | orchestrator | 01:39:42.882 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:39:42.882845 | orchestrator | 01:39:42.882 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-14 01:39:42.882876 | orchestrator | 01:39:42.882 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:39:42.882936 | orchestrator | 01:39:42.882 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.882945 | orchestrator | 01:39:42.882 STDOUT terraform:  } 2025-05-14 01:39:42.883003 | orchestrator | 01:39:42.882 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-14 01:39:42.883055 | orchestrator | 01:39:42.882 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-14 01:39:42.883077 | orchestrator | 01:39:42.883 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:39:42.883086 | orchestrator | 01:39:42.883 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:39:42.883142 | orchestrator | 01:39:42.883 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.883193 | orchestrator | 01:39:42.883 STDOUT terraform:  + protocol = "icmp" 2025-05-14 01:39:42.883232 | orchestrator | 01:39:42.883 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.883272 | orchestrator | 01:39:42.883 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:39:42.883316 | orchestrator | 01:39:42.883 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:39:42.883352 | orchestrator | 01:39:42.883 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:39:42.883397 | orchestrator | 01:39:42.883 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.883425 | orchestrator | 01:39:42.883 STDOUT terraform:  } 2025-05-14 01:39:42.883498 | orchestrator | 01:39:42.883 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-14 01:39:42.883580 | orchestrator | 01:39:42.883 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-14 01:39:42.883624 | orchestrator | 01:39:42.883 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:39:42.883660 | orchestrator | 01:39:42.883 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:39:42.883694 | orchestrator | 01:39:42.883 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.883716 | orchestrator | 01:39:42.883 STDOUT terraform:  + protocol = "tcp" 2025-05-14 01:39:42.883748 | orchestrator | 01:39:42.883 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.883778 | orchestrator | 01:39:42.883 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:39:42.883808 | orchestrator | 01:39:42.883 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:39:42.883851 | orchestrator | 01:39:42.883 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:39:42.883895 | orchestrator | 01:39:42.883 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.883904 | orchestrator | 01:39:42.883 STDOUT terraform:  } 2025-05-14 01:39:42.883992 | orchestrator | 01:39:42.883 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-14 01:39:42.884078 | orchestrator | 01:39:42.883 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-14 01:39:42.884096 | orchestrator | 01:39:42.884 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:39:42.884136 | orchestrator | 01:39:42.884 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:39:42.884262 | orchestrator | 01:39:42.884 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.884298 | orchestrator | 01:39:42.884 STDOUT terraform:  + protocol = "udp" 2025-05-14 01:39:42.884331 | orchestrator | 01:39:42.884 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.884363 | orchestrator | 01:39:42.884 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:39:42.884384 | orchestrator | 01:39:42.884 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:39:42.884418 | orchestrator | 01:39:42.884 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:39:42.884449 | orchestrator | 01:39:42.884 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.884457 | orchestrator | 01:39:42.884 STDOUT terraform:  } 2025-05-14 01:39:42.884510 | orchestrator | 01:39:42.884 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-14 01:39:42.884560 | orchestrator | 01:39:42.884 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-14 01:39:42.884582 | orchestrator | 01:39:42.884 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:39:42.884591 | orchestrator | 01:39:42.884 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:39:42.884629 | orchestrator | 01:39:42.884 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.884657 | orchestrator | 01:39:42.884 STDOUT terraform:  + protocol = "icmp" 2025-05-14 01:39:42.884673 | orchestrator | 01:39:42.884 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.884715 | orchestrator | 01:39:42.884 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:39:42.884724 | orchestrator | 01:39:42.884 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:39:42.884763 | orchestrator | 01:39:42.884 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:39:42.884793 | orchestrator | 01:39:42.884 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.884802 | orchestrator | 01:39:42.884 STDOUT terraform:  } 2025-05-14 01:39:42.884854 | orchestrator | 01:39:42.884 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-14 01:39:42.884903 | orchestrator | 01:39:42.884 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-14 01:39:42.884925 | orchestrator | 01:39:42.884 STDOUT terraform:  + description = "vrrp" 2025-05-14 01:39:42.884960 | orchestrator | 01:39:42.884 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:39:42.884990 | orchestrator | 01:39:42.884 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:39:42.885042 | orchestrator | 01:39:42.884 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.885073 | orchestrator | 01:39:42.885 STDOUT terraform:  + protocol = "112" 2025-05-14 01:39:42.885116 | orchestrator | 01:39:42.885 STDOUT terraform:  + region = (known after apply) 2025-05-14 
01:39:42.885206 | orchestrator | 01:39:42.885 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:39:42.885217 | orchestrator | 01:39:42.885 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:39:42.885252 | orchestrator | 01:39:42.885 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:39:42.885283 | orchestrator | 01:39:42.885 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.885291 | orchestrator | 01:39:42.885 STDOUT terraform:  } 2025-05-14 01:39:42.885345 | orchestrator | 01:39:42.885 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-14 01:39:42.885393 | orchestrator | 01:39:42.885 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-14 01:39:42.885424 | orchestrator | 01:39:42.885 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.885460 | orchestrator | 01:39:42.885 STDOUT terraform:  + description = "management security group" 2025-05-14 01:39:42.885489 | orchestrator | 01:39:42.885 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.885519 | orchestrator | 01:39:42.885 STDOUT terraform:  + name = "testbed-management" 2025-05-14 01:39:42.885547 | orchestrator | 01:39:42.885 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.885579 | orchestrator | 01:39:42.885 STDOUT terraform:  + stateful = (known after apply) 2025-05-14 01:39:42.885622 | orchestrator | 01:39:42.885 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.885647 | orchestrator | 01:39:42.885 STDOUT terraform:  } 2025-05-14 01:39:42.885697 | orchestrator | 01:39:42.885 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-14 01:39:42.885746 | orchestrator | 01:39:42.885 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-14 01:39:42.885775 | orchestrator | 01:39:42.885 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.885805 | orchestrator | 01:39:42.885 STDOUT terraform:  + description = "node security group" 2025-05-14 01:39:42.885834 | orchestrator | 01:39:42.885 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.885860 | orchestrator | 01:39:42.885 STDOUT terraform:  + name = "testbed-node" 2025-05-14 01:39:42.885889 | orchestrator | 01:39:42.885 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.885919 | orchestrator | 01:39:42.885 STDOUT terraform:  + stateful = (known after apply) 2025-05-14 01:39:42.885948 | orchestrator | 01:39:42.885 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.885956 | orchestrator | 01:39:42.885 STDOUT terraform:  } 2025-05-14 01:39:42.886034 | orchestrator | 01:39:42.885 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-14 01:39:42.886106 | orchestrator | 01:39:42.886 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-14 01:39:42.886173 | orchestrator | 01:39:42.886 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:39:42.886223 | orchestrator | 01:39:42.886 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-14 01:39:42.886261 | orchestrator | 01:39:42.886 STDOUT terraform:  + dns_nameservers = [ 2025-05-14 01:39:42.886290 | orchestrator | 01:39:42.886 STDOUT terraform:  + "8.8.8.8", 2025-05-14 01:39:42.886320 | orchestrator | 01:39:42.886 STDOUT terraform:  + 
"9.9.9.9", 2025-05-14 01:39:42.886340 | orchestrator | 01:39:42.886 STDOUT terraform:  ] 2025-05-14 01:39:42.886379 | orchestrator | 01:39:42.886 STDOUT terraform:  + enable_dhcp = true 2025-05-14 01:39:42.886426 | orchestrator | 01:39:42.886 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-14 01:39:42.886475 | orchestrator | 01:39:42.886 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.886509 | orchestrator | 01:39:42.886 STDOUT terraform:  + ip_version = 4 2025-05-14 01:39:42.886553 | orchestrator | 01:39:42.886 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-14 01:39:42.886599 | orchestrator | 01:39:42.886 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-14 01:39:42.886654 | orchestrator | 01:39:42.886 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-14 01:39:42.886702 | orchestrator | 01:39:42.886 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:39:42.886737 | orchestrator | 01:39:42.886 STDOUT terraform:  + no_gateway = false 2025-05-14 01:39:42.886784 | orchestrator | 01:39:42.886 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:39:42.886833 | orchestrator | 01:39:42.886 STDOUT terraform:  + service_types = (known after apply) 2025-05-14 01:39:42.886881 | orchestrator | 01:39:42.886 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:39:42.886913 | orchestrator | 01:39:42.886 STDOUT terraform:  + allocation_pool { 2025-05-14 01:39:42.886957 | orchestrator | 01:39:42.886 STDOUT terraform:  + end = "192.168.31.250" 2025-05-14 01:39:42.887005 | orchestrator | 01:39:42.886 STDOUT terraform:  + start = "192.168.31.200" 2025-05-14 01:39:42.887017 | orchestrator | 01:39:42.886 STDOUT terraform:  } 2025-05-14 01:39:42.887052 | orchestrator | 01:39:42.887 STDOUT terraform:  } 2025-05-14 01:39:42.887096 | orchestrator | 01:39:42.887 STDOUT terraform:  # terraform_data.image will be created 2025-05-14 01:39:42.887134 | orchestrator | 01:39:42.887 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-14 01:39:42.887224 | orchestrator | 01:39:42.887 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.887265 | orchestrator | 01:39:42.887 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-14 01:39:42.887317 | orchestrator | 01:39:42.887 STDOUT terraform:  + output = (known after apply) 2025-05-14 01:39:42.887336 | orchestrator | 01:39:42.887 STDOUT terraform:  } 2025-05-14 01:39:42.887394 | orchestrator | 01:39:42.887 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-14 01:39:42.887449 | orchestrator | 01:39:42.887 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-14 01:39:42.887496 | orchestrator | 01:39:42.887 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:39:42.887537 | orchestrator | 01:39:42.887 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-14 01:39:42.887582 | orchestrator | 01:39:42.887 STDOUT terraform:  + output = (known after apply) 2025-05-14 01:39:42.887602 | orchestrator | 01:39:42.887 STDOUT terraform:  } 2025-05-14 01:39:42.887645 | orchestrator | 01:39:42.887 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 
2025-05-14 01:39:42.887664 | orchestrator | 01:39:42.887 STDOUT terraform: Changes to Outputs: 2025-05-14 01:39:42.887697 | orchestrator | 01:39:42.887 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-14 01:39:42.887734 | orchestrator | 01:39:42.887 STDOUT terraform:  + private_key = (sensitive value) 2025-05-14 01:39:43.088416 | orchestrator | 01:39:43.088 STDOUT terraform: terraform_data.image: Creating... 2025-05-14 01:39:43.088487 | orchestrator | 01:39:43.088 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-14 01:39:43.088972 | orchestrator | 01:39:43.088 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=6d34fd91-d1fd-c1bd-0fb6-497264f468e6] 2025-05-14 01:39:43.090657 | orchestrator | 01:39:43.090 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=585cc011-166e-bfa8-14b8-aec9c1c8e365] 2025-05-14 01:39:43.110137 | orchestrator | 01:39:43.108 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-14 01:39:43.110521 | orchestrator | 01:39:43.110 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-14 01:39:43.113202 | orchestrator | 01:39:43.113 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-14 01:39:43.113543 | orchestrator | 01:39:43.113 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-14 01:39:43.114604 | orchestrator | 01:39:43.114 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-14 01:39:43.115760 | orchestrator | 01:39:43.115 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-14 01:39:43.117291 | orchestrator | 01:39:43.117 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-14 01:39:43.117768 | orchestrator | 01:39:43.117 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-14 01:39:43.124566 | orchestrator | 01:39:43.124 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-14 01:39:43.130442 | orchestrator | 01:39:43.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-14 01:39:43.621303 | orchestrator | 01:39:43.620 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-14 01:39:43.629460 | orchestrator | 01:39:43.629 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-14 01:39:43.631300 | orchestrator | 01:39:43.631 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-14 01:39:43.639184 | orchestrator | 01:39:43.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-14 01:39:50.019728 | orchestrator | 01:39:50.019 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 7s [id=8795bf94-8dc4-4804-a508-36c53a3e7459] 2025-05-14 01:39:50.024570 | orchestrator | 01:39:50.024 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-14 01:39:50.415723 | orchestrator | 01:39:50.415 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-05-14 01:39:50.422580 | orchestrator | 01:39:50.422 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
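The apply begins with the two terraform_data resources that carry the image name "Ubuntu 24.04" and the image data sources that are read immediately afterwards. A plausible wiring, consistent with the ordering above but not confirmed by the log, is sketched below; the most_recent filter and the exact lookup are assumptions.

```hcl
# Hedged sketch: terraform_data passes the image name through to an image
# lookup. The name "Ubuntu 24.04" comes from the plan output; the lookup
# details are assumptions, not the verbatim testbed source.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}
```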
2025-05-14 01:39:53.114605 | orchestrator | 01:39:53.114 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-14 01:39:53.115865 | orchestrator | 01:39:53.115 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-14 01:39:53.119262 | orchestrator | 01:39:53.119 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-14 01:39:53.119313 | orchestrator | 01:39:53.119 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-14 01:39:53.119405 | orchestrator | 01:39:53.119 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-14 01:39:53.126379 | orchestrator | 01:39:53.126 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-14 01:39:53.130727 | orchestrator | 01:39:53.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-14 01:39:53.629804 | orchestrator | 01:39:53.629 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-14 01:39:53.639250 | orchestrator | 01:39:53.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-14 01:39:53.706346 | orchestrator | 01:39:53.705 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=9158ba9c-f661-457a-83a0-7301d2e715e9] 2025-05-14 01:39:53.712683 | orchestrator | 01:39:53.712 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-14 01:39:53.720452 | orchestrator | 01:39:53.720 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=4bf8951c-ead1-422f-8e98-563fd238f873] 2025-05-14 01:39:53.726668 | orchestrator | 01:39:53.726 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-14 01:39:53.745321 | orchestrator | 01:39:53.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=276d5307-5ea7-4279-8794-03223ea8507b] 2025-05-14 01:39:53.752806 | orchestrator | 01:39:53.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-14 01:39:53.761413 | orchestrator | 01:39:53.761 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=2fe9822d-742a-4109-b2fd-4f62bd011e9b] 2025-05-14 01:39:53.771424 | orchestrator | 01:39:53.771 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=7d716f79-cf1d-4cd5-9251-d30dd616fe8c] 2025-05-14 01:39:53.775842 | orchestrator | 01:39:53.775 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=6c9e420d-0c60-4ebc-ac19-f905b2b7a82f] 2025-05-14 01:39:53.775927 | orchestrator | 01:39:53.775 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-14 01:39:53.778417 | orchestrator | 01:39:53.778 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-14 01:39:53.785394 | orchestrator | 01:39:53.785 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
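The node_volume resources completing above are plain Cinder volumes created with count. A minimal sketch follows; only the resource type, its name, and the count of nine are taken from the log, while the size and naming scheme are placeholders.

```hcl
# Hedged sketch of the extra node volumes. Count and resource name match the
# apply log; size and volume name are placeholders not visible in this log.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9
  name  = "testbed-volume-${count.index}" # placeholder naming scheme
  size  = 20                              # placeholder size in GB
}
```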
2025-05-14 01:39:53.797303 | orchestrator | 01:39:53.796 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=07a08b1a-3bd9-437e-a737-9a0e3fc440bf] 2025-05-14 01:39:53.802160 | orchestrator | 01:39:53.801 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-14 01:39:53.824902 | orchestrator | 01:39:53.824 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=e31a2ff7-84d9-48c9-b0e1-1526f23b46b1] 2025-05-14 01:39:53.839785 | orchestrator | 01:39:53.839 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-14 01:39:53.844412 | orchestrator | 01:39:53.844 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=fa47bcc042b49433bee79a2b42595e5329b02dbb] 2025-05-14 01:39:53.856765 | orchestrator | 01:39:53.856 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=7c39c8ea-7878-4e89-b4ec-61bbe868aea7] 2025-05-14 01:39:53.859081 | orchestrator | 01:39:53.858 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-14 01:39:53.866602 | orchestrator | 01:39:53.866 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=f85b43d2ec1367cedeea68fc45ece23ccf42b35e] 2025-05-14 01:39:59.457798 | orchestrator | 01:39:59.457 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 5s [id=51fe0767-6f15-4ad3-a6a0-137e67f38ce1] 2025-05-14 01:39:59.466221 | orchestrator | 01:39:59.465 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-14 01:40:00.423391 | orchestrator | 01:40:00.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-14 01:40:00.763991 | orchestrator | 01:40:00.763 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=d343cbf4-64a5-4d74-aedc-ee3edf681b53] 2025-05-14 01:40:03.713791 | orchestrator | 01:40:03.713 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-05-14 01:40:03.728833 | orchestrator | 01:40:03.728 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-14 01:40:03.756244 | orchestrator | 01:40:03.756 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-14 01:40:03.776642 | orchestrator | 01:40:03.776 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-14 01:40:03.779780 | orchestrator | 01:40:03.779 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-14 01:40:03.785991 | orchestrator | 01:40:03.785 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... 
[10s elapsed] 2025-05-14 01:40:04.065336 | orchestrator | 01:40:04.065 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=f8bbaec8-a392-4161-9c9a-fade409c1694] 2025-05-14 01:40:04.121258 | orchestrator | 01:40:04.120 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=1e4d6019-cfa5-4932-b542-f7abf313e9f1] 2025-05-14 01:40:04.140410 | orchestrator | 01:40:04.140 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=5815f41e-a950-4348-941c-f26c72002134] 2025-05-14 01:40:04.163189 | orchestrator | 01:40:04.162 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=8a52c913-9b08-49a5-b109-def0ab7dcd30] 2025-05-14 01:40:04.175697 | orchestrator | 01:40:04.175 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=52375a6f-eba6-4d12-851a-4fdfc6d8b008] 2025-05-14 01:40:04.183474 | orchestrator | 01:40:04.183 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=4b4844e9-36f4-43ee-94f9-25fe1d60740b] 2025-05-14 01:40:06.027064 | orchestrator | 01:40:06.026 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=7e786755-86dc-4e9a-8118-74a72ef05fd6] 2025-05-14 01:40:06.035672 | orchestrator | 01:40:06.035 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-14 01:40:06.039257 | orchestrator | 01:40:06.038 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-14 01:40:06.039357 | orchestrator | 01:40:06.038 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-14 01:40:06.162961 | orchestrator | 01:40:06.162 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=28cf8d37-e82b-4f0f-ae3a-67a9d35380ab] 2025-05-14 01:40:06.173936 | orchestrator | 01:40:06.173 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=125e18cd-8edb-4c7a-95a1-f7e29a791725] 2025-05-14 01:40:06.177336 | orchestrator | 01:40:06.176 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-14 01:40:06.179746 | orchestrator | 01:40:06.179 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-14 01:40:06.183535 | orchestrator | 01:40:06.183 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-14 01:40:06.190527 | orchestrator | 01:40:06.190 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-14 01:40:06.196353 | orchestrator | 01:40:06.196 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-14 01:40:06.196420 | orchestrator | 01:40:06.196 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-14 01:40:06.197442 | orchestrator | 01:40:06.197 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-14 01:40:06.201439 | orchestrator | 01:40:06.201 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-14 01:40:06.202141 | orchestrator | 01:40:06.201 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 
2025-05-14 01:40:06.303053 | orchestrator | 01:40:06.302 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=bbe9bd87-89f3-4b35-b435-6e83e4675fdf] 2025-05-14 01:40:06.310967 | orchestrator | 01:40:06.310 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-14 01:40:06.475515 | orchestrator | 01:40:06.475 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=0092cf52-cb8c-4a89-8b02-5a79a9c9f908] 2025-05-14 01:40:06.482849 | orchestrator | 01:40:06.482 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-14 01:40:06.599190 | orchestrator | 01:40:06.598 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=99ec07a6-d1da-4164-86ec-89584ad9b2f3] 2025-05-14 01:40:06.606302 | orchestrator | 01:40:06.606 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-14 01:40:06.685682 | orchestrator | 01:40:06.685 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=efbb62c8-5f8b-4dee-999b-3f4b5fe0e02a] 2025-05-14 01:40:06.692166 | orchestrator | 01:40:06.691 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-14 01:40:06.723304 | orchestrator | 01:40:06.722 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=3ec291ba-e227-4382-af74-37bc51433321] 2025-05-14 01:40:06.729096 | orchestrator | 01:40:06.728 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-14 01:40:06.810602 | orchestrator | 01:40:06.810 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=dd5166c1-ded5-4a82-a191-843c493c2491] 2025-05-14 01:40:06.817271 | orchestrator | 01:40:06.816 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-14 01:40:06.945000 | orchestrator | 01:40:06.944 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=38dcd091-b8fe-4cdb-8a78-65ab38158a06] 2025-05-14 01:40:06.957888 | orchestrator | 01:40:06.957 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
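The security group rules completing here were spelled out in the plan (SSH on 22/tcp, WireGuard on 51820/udp, VRRP as IP protocol 112, plus intra-network TCP/UDP/ICMP rules). A condensed sketch of the management group and two representative rules follows; which group the VRRP rule is attached to is not visible in the plan and is assumed here.

```hcl
# Hedged sketch of the management security group plus two rules whose values
# (ports, protocols, prefixes) are taken from the plan output in this section.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP, as shown in the plan
  remote_ip_prefix  = "0.0.0.0/0"
  # The target group is not visible in the plan; management is assumed here.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```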
2025-05-14 01:40:07.068565 | orchestrator | 01:40:07.068 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=259cfb59-f3bb-4fd6-b9df-d8486f7bb625] 2025-05-14 01:40:07.194058 | orchestrator | 01:40:07.193 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=a61bc91a-668c-4c84-835e-409d07650f65] 2025-05-14 01:40:11.859402 | orchestrator | 01:40:11.858 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=f2a02b92-a4ab-485b-855a-f31ab136e1ae] 2025-05-14 01:40:11.945410 | orchestrator | 01:40:11.945 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=9ddf52f4-9262-4fc7-b417-b54182336a82] 2025-05-14 01:40:11.956974 | orchestrator | 01:40:11.956 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=2e85b7d6-9d24-444b-be1b-4bbfc53549bb] 2025-05-14 01:40:12.067901 | orchestrator | 01:40:12.067 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=e9c6248d-bfd6-4ea3-ba7f-0b22ee2b2865] 2025-05-14 01:40:12.119823 | orchestrator | 01:40:12.119 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=7817fa54-20fc-469a-a791-94d2b89eadc3] 2025-05-14 01:40:12.237472 | orchestrator | 01:40:12.237 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=6836d4d2-c44a-4563-9160-e5ae9c467c9d] 2025-05-14 01:40:13.204175 | orchestrator | 01:40:13.203 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=c588f8a8-f7d4-4e3d-aad2-5738a1ce04ed] 2025-05-14 01:40:13.381904 | orchestrator | 01:40:13.381 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=4091f0d4-f965-438c-bfcc-0c0539bba654] 2025-05-14 01:40:13.408148 | orchestrator | 01:40:13.407 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-14 01:40:13.417201 | orchestrator | 01:40:13.416 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-14 01:40:13.439164 | orchestrator | 01:40:13.439 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-14 01:40:13.446993 | orchestrator | 01:40:13.446 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-14 01:40:13.451506 | orchestrator | 01:40:13.451 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-14 01:40:13.455792 | orchestrator | 01:40:13.455 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-14 01:40:13.459491 | orchestrator | 01:40:13.459 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-14 01:40:21.645639 | orchestrator | 01:40:21.643 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 9s [id=5099119f-58cc-4206-90ba-5bcc7d872a4e] 2025-05-14 01:40:21.659558 | orchestrator | 01:40:21.659 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-14 01:40:21.659640 | orchestrator | 01:40:21.659 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-05-14 01:40:21.659719 | orchestrator | 01:40:21.659 STDOUT terraform: local_file.inventory: Creating... 
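The node management ports created above follow the pattern of the port shown in the plan at the top of this section: a fixed IP on the management subnet plus allowed_address_pairs for the service CIDRs. In the sketch below the address-pair values come from the plan and the count of six matches the apply log, while the fixed-IP numbering and the security-group wiring are assumptions.

```hcl
# Hedged sketch of a node management port. allowed_address_pairs values are
# taken from the plan output; the IP numbering scheme and the security group
# attachment are assumptions.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  security_group_ids = [
    openstack_networking_secgroup_v2.security_group_node.id,
  ]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}" # assumed numbering scheme
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }

  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
}
```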
2025-05-14 01:40:21.662173 | orchestrator | 01:40:21.661 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=e60a2f1e64a6363f295cd34a3ab1717d8e145d1c] 2025-05-14 01:40:21.667708 | orchestrator | 01:40:21.667 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=793a9f130468d46e0f79acbf1e75aba0a5dd4cde] 2025-05-14 01:40:23.420590 | orchestrator | 01:40:23.420 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-14 01:40:23.439919 | orchestrator | 01:40:23.439 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-14 01:40:23.456352 | orchestrator | 01:40:23.455 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-14 01:40:23.456454 | orchestrator | 01:40:23.456 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-14 01:40:23.463415 | orchestrator | 01:40:23.463 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-14 01:40:23.464581 | orchestrator | 01:40:23.464 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-14 01:40:23.467297 | orchestrator | 01:40:23.466 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=5099119f-58cc-4206-90ba-5bcc7d872a4e] 2025-05-14 01:40:33.420969 | orchestrator | 01:40:33.420 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-14 01:40:33.441037 | orchestrator | 01:40:33.440 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-14 01:40:33.457417 | orchestrator | 01:40:33.457 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-14 01:40:33.457481 | orchestrator | 01:40:33.457 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-14 01:40:33.464823 | orchestrator | 01:40:33.464 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-14 01:40:33.464887 | orchestrator | 01:40:33.464 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-14 01:40:34.011818 | orchestrator | 01:40:34.011 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=322b183e-a2c8-429e-8d6a-7232d89b3025] 2025-05-14 01:40:34.055787 | orchestrator | 01:40:34.055 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=92f16722-5635-4275-a3fd-7e96e703649f] 2025-05-14 01:40:34.112089 | orchestrator | 01:40:34.111 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=75198af6-f961-4184-9c0d-3066647c6ed1] 2025-05-14 01:40:43.422534 | orchestrator | 01:40:43.421 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-14 01:40:43.465180 | orchestrator | 01:40:43.464 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-05-14 01:40:43.465322 | orchestrator | 01:40:43.464 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-05-14 01:40:43.875185 | orchestrator | 01:40:43.874 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=2be01343-2401-47f6-92d7-3f3f15f99001] 2025-05-14 01:40:43.992883 | orchestrator | 01:40:43.992 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=5cf8b38e-479f-49a6-8602-cf7c1551b76b] 2025-05-14 01:40:44.276945 | orchestrator | 01:40:44.276 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=66409d2c-7d1e-49de-807d-fc0acbb21d55] 2025-05-14 01:40:44.294863 | orchestrator | 01:40:44.294 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-14 01:40:44.303773 | orchestrator | 01:40:44.303 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3811414338204436979] 2025-05-14 01:40:44.304109 | orchestrator | 01:40:44.304 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-14 01:40:44.308468 | orchestrator | 01:40:44.308 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-14 01:40:44.309542 | orchestrator | 01:40:44.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-14 01:40:44.314353 | orchestrator | 01:40:44.314 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-14 01:40:44.329697 | orchestrator | 01:40:44.329 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-14 01:40:44.331217 | orchestrator | 01:40:44.331 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-14 01:40:44.332052 | orchestrator | 01:40:44.331 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-14 01:40:44.333304 | orchestrator | 01:40:44.333 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-14 01:40:44.335642 | orchestrator | 01:40:44.335 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-05-14 01:40:44.352839 | orchestrator | 01:40:44.352 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 
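The attachment IDs reported below are instance-id/volume-id pairs and show the nine node volumes being distributed across three of the node servers. The sketch reproduces that pattern with a modulo expression; the actual configuration may compute the mapping differently, and the depends_on hook only mirrors the ordering suggested by null_resource.node_semaphore in the log.

```hcl
# Hedged sketch of the node volume attachments. The modulo mapping reproduces
# the instance/volume pairing visible in the attachment IDs below; the real
# configuration may differ.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3 + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id

  depends_on = [null_resource.node_semaphore] # assumed ordering hook
}
```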
2025-05-14 01:40:49.642574 | orchestrator | 01:40:49.642 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=322b183e-a2c8-429e-8d6a-7232d89b3025/07a08b1a-3bd9-437e-a737-9a0e3fc440bf] 2025-05-14 01:40:49.644718 | orchestrator | 01:40:49.644 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=92f16722-5635-4275-a3fd-7e96e703649f/9158ba9c-f661-457a-83a0-7301d2e715e9] 2025-05-14 01:40:49.669495 | orchestrator | 01:40:49.668 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=66409d2c-7d1e-49de-807d-fc0acbb21d55/e31a2ff7-84d9-48c9-b0e1-1526f23b46b1] 2025-05-14 01:40:49.672395 | orchestrator | 01:40:49.672 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=322b183e-a2c8-429e-8d6a-7232d89b3025/276d5307-5ea7-4279-8794-03223ea8507b] 2025-05-14 01:40:49.707711 | orchestrator | 01:40:49.707 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=66409d2c-7d1e-49de-807d-fc0acbb21d55/7c39c8ea-7878-4e89-b4ec-61bbe868aea7] 2025-05-14 01:40:49.707879 | orchestrator | 01:40:49.707 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=92f16722-5635-4275-a3fd-7e96e703649f/4bf8951c-ead1-422f-8e98-563fd238f873] 2025-05-14 01:40:49.709576 | orchestrator | 01:40:49.709 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=322b183e-a2c8-429e-8d6a-7232d89b3025/7d716f79-cf1d-4cd5-9251-d30dd616fe8c] 2025-05-14 01:40:49.725982 | orchestrator | 01:40:49.725 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=92f16722-5635-4275-a3fd-7e96e703649f/2fe9822d-742a-4109-b2fd-4f62bd011e9b] 2025-05-14 01:40:49.733842 | orchestrator | 01:40:49.733 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=66409d2c-7d1e-49de-807d-fc0acbb21d55/6c9e420d-0c60-4ebc-ac19-f905b2b7a82f] 2025-05-14 01:40:54.346085 | orchestrator | 01:40:54.345 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-14 01:41:04.347624 | orchestrator | 01:41:04.347 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-14 01:41:04.854648 | orchestrator | 01:41:04.854 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=194906f4-7628-46d3-a0b8-638b6437ab3a] 2025-05-14 01:41:04.876949 | orchestrator | 01:41:04.876 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
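Both outputs are reported as "(sensitive value)" in the plan and print empty in the listing below, which is exactly what sensitive = true does. A hedged sketch of the manager floating IP created above together with the two outputs; the pool name and the value expressions are assumptions, while the resource and output names match the log.

```hcl
# Hedged sketch: manager floating IP, its association with the manager's
# management port, and the two sensitive outputs reported by this apply.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external" # placeholder; the real pool name is not in the log
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}
```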
2025-05-14 01:41:04.877015 | orchestrator | 01:41:04.876 STDOUT terraform: Outputs: 2025-05-14 01:41:04.877028 | orchestrator | 01:41:04.876 STDOUT terraform: manager_address = 2025-05-14 01:41:04.877041 | orchestrator | 01:41:04.876 STDOUT terraform: private_key = 2025-05-14 01:41:05.229523 | orchestrator | ok: Runtime: 0:01:32.309803 2025-05-14 01:41:05.266934 | 2025-05-14 01:41:05.267054 | TASK [Fetch manager address] 2025-05-14 01:41:05.714175 | orchestrator | ok 2025-05-14 01:41:05.725433 | 2025-05-14 01:41:05.725575 | TASK [Set manager_host address] 2025-05-14 01:41:05.805896 | orchestrator | ok 2025-05-14 01:41:05.817657 | 2025-05-14 01:41:05.817798 | LOOP [Update ansible collections] 2025-05-14 01:41:06.609856 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-14 01:41:06.610208 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 01:41:06.610261 | orchestrator | Starting galaxy collection install process 2025-05-14 01:41:06.610298 | orchestrator | Process install dependency map 2025-05-14 01:41:06.610330 | orchestrator | Starting collection install process 2025-05-14 01:41:06.610379 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-05-14 01:41:06.610416 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-05-14 01:41:06.610451 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-14 01:41:06.610517 | orchestrator | ok: Item: commons Runtime: 0:00:00.480025 2025-05-14 01:41:07.492018 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 01:41:07.492282 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-14 01:41:07.492343 | orchestrator | Starting galaxy collection install process 2025-05-14 01:41:07.492442 | orchestrator | Process install dependency map 2025-05-14 01:41:07.492482 | orchestrator | Starting collection install process 2025-05-14 01:41:07.492518 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-05-14 01:41:07.492554 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-05-14 01:41:07.492588 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-14 01:41:07.492645 | orchestrator | ok: Item: services Runtime: 0:00:00.625023 2025-05-14 01:41:07.515912 | 2025-05-14 01:41:07.516109 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-14 01:41:19.173858 | orchestrator | ok 2025-05-14 01:41:19.184945 | 2025-05-14 01:41:19.185070 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-14 01:42:19.235867 | orchestrator | ok 2025-05-14 01:42:19.246323 | 2025-05-14 01:42:19.246438 | TASK [Fetch manager ssh hostkey] 2025-05-14 01:42:20.824445 | orchestrator | Output suppressed because no_log was given 2025-05-14 01:42:20.839330 | 2025-05-14 01:42:20.839533 | TASK [Get ssh keypair from terraform environment] 2025-05-14 01:42:21.375482 | orchestrator | ok: Runtime: 0:00:00.008973 2025-05-14 01:42:21.393133 | 2025-05-14 01:42:21.393356 | TASK [Point out that the following task takes some time and does not give any output] 
2025-05-14 01:42:21.433234 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-14 01:42:21.441946 | 2025-05-14 01:42:21.442062 | TASK [Run manager part 0] 2025-05-14 01:42:22.340648 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 01:42:22.379442 | orchestrator | 2025-05-14 01:42:22.379498 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-14 01:42:22.379504 | orchestrator | 2025-05-14 01:42:22.379516 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-14 01:42:24.274757 | orchestrator | ok: [testbed-manager] 2025-05-14 01:42:24.274825 | orchestrator | 2025-05-14 01:42:24.274853 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-14 01:42:24.274864 | orchestrator | 2025-05-14 01:42:24.274875 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:42:26.369539 | orchestrator | ok: [testbed-manager] 2025-05-14 01:42:26.370239 | orchestrator | 2025-05-14 01:42:26.370344 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-14 01:42:27.099344 | orchestrator | ok: [testbed-manager] 2025-05-14 01:42:27.099425 | orchestrator | 2025-05-14 01:42:27.099438 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-14 01:42:27.149242 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:42:27.149307 | orchestrator | 2025-05-14 01:42:27.149317 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-14 01:42:27.183606 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:42:27.183663 | orchestrator | 2025-05-14 01:42:27.183671 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-14 01:42:27.213630 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:42:27.213676 | orchestrator | 2025-05-14 01:42:27.213681 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-14 01:42:27.242349 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:42:27.242389 | orchestrator | 2025-05-14 01:42:27.242394 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-14 01:42:27.278834 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:42:27.278901 | orchestrator | 2025-05-14 01:42:27.278911 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-14 01:42:27.323749 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:42:27.323808 | orchestrator | 2025-05-14 01:42:27.323817 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-14 01:42:27.360350 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:42:27.360414 | orchestrator | 2025-05-14 01:42:27.360422 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-14 01:42:28.234631 | orchestrator | changed: [testbed-manager] 2025-05-14 01:42:28.234696 | orchestrator | 2025-05-14 01:42:28.234706 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-05-14 01:45:34.276873 | orchestrator | changed: [testbed-manager] 2025-05-14 01:45:34.276954 | orchestrator | 2025-05-14 01:45:34.276965 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-14 01:46:50.773839 | orchestrator | changed: [testbed-manager] 2025-05-14 01:46:50.773923 | orchestrator | 2025-05-14 01:46:50.773931 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-14 01:47:15.272153 | orchestrator | changed: [testbed-manager] 2025-05-14 01:47:15.272255 | orchestrator | 2025-05-14 01:47:15.272269 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-14 01:47:24.368265 | orchestrator | changed: [testbed-manager] 2025-05-14 01:47:24.368331 | orchestrator | 2025-05-14 01:47:24.368339 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-14 01:47:24.421420 | orchestrator | ok: [testbed-manager] 2025-05-14 01:47:24.421496 | orchestrator | 2025-05-14 01:47:24.421512 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-14 01:47:25.242414 | orchestrator | ok: [testbed-manager] 2025-05-14 01:47:25.242477 | orchestrator | 2025-05-14 01:47:25.242495 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-14 01:47:25.989938 | orchestrator | changed: [testbed-manager] 2025-05-14 01:47:25.990043 | orchestrator | 2025-05-14 01:47:25.990062 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-14 01:47:32.509543 | orchestrator | changed: [testbed-manager] 2025-05-14 01:47:32.509633 | orchestrator | 2025-05-14 01:47:32.509678 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-14 01:47:38.672171 | orchestrator | changed: [testbed-manager] 2025-05-14 01:47:38.672262 | orchestrator | 2025-05-14 01:47:38.672273 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-14 01:47:41.328035 | orchestrator | changed: [testbed-manager] 2025-05-14 01:47:41.328155 | orchestrator | 2025-05-14 01:47:41.328172 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-14 01:47:43.124309 | orchestrator | changed: [testbed-manager] 2025-05-14 01:47:43.124624 | orchestrator | 2025-05-14 01:47:43.124647 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-14 01:47:44.285479 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-14 01:47:44.285609 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-14 01:47:44.285627 | orchestrator | 2025-05-14 01:47:44.285641 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-14 01:47:44.328547 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-14 01:47:44.328644 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-14 01:47:44.328657 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-14 01:47:44.328670 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-14 01:47:48.239664 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-14 01:47:48.239760 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-14 01:47:48.239775 | orchestrator | 2025-05-14 01:47:48.239788 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-14 01:47:48.826710 | orchestrator | changed: [testbed-manager] 2025-05-14 01:47:48.826808 | orchestrator | 2025-05-14 01:47:48.826825 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-14 01:49:10.627439 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-14 01:49:10.627536 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-14 01:49:10.627548 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-14 01:49:10.627557 | orchestrator | 2025-05-14 01:49:10.627565 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-14 01:49:13.000093 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-14 01:49:13.000146 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-14 01:49:13.000153 | orchestrator | 2025-05-14 01:49:13.000159 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-14 01:49:13.000164 | orchestrator | 2025-05-14 01:49:13.000168 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:49:14.391779 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:14.391872 | orchestrator | 2025-05-14 01:49:14.391890 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-14 01:49:14.445817 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:14.445855 | orchestrator | 2025-05-14 01:49:14.445864 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-14 01:49:14.512017 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:14.512083 | orchestrator | 2025-05-14 01:49:14.512092 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-14 01:49:15.337030 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:15.337147 | orchestrator | 2025-05-14 01:49:15.337162 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-14 01:49:16.086724 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:16.086815 | orchestrator | 2025-05-14 01:49:16.086833 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-14 01:49:17.535065 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-14 01:49:17.535353 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-14 01:49:17.535371 | orchestrator | 2025-05-14 01:49:17.535401 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-14 01:49:18.972854 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:18.972905 | orchestrator | 2025-05-14 01:49:18.972913 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-14 01:49:20.755599 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 
01:49:20.755688 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-14 01:49:20.755702 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-14 01:49:20.755715 | orchestrator | 2025-05-14 01:49:20.755728 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-14 01:49:21.334324 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:21.334416 | orchestrator | 2025-05-14 01:49:21.334434 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-14 01:49:21.402875 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:21.402981 | orchestrator | 2025-05-14 01:49:21.403000 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-14 01:49:22.256944 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:49:22.257466 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:22.257491 | orchestrator | 2025-05-14 01:49:22.257504 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-14 01:49:22.292415 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:22.292489 | orchestrator | 2025-05-14 01:49:22.292507 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-14 01:49:22.326534 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:22.326618 | orchestrator | 2025-05-14 01:49:22.326635 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-14 01:49:22.358816 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:22.358902 | orchestrator | 2025-05-14 01:49:22.358920 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-14 01:49:22.401903 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:22.401976 | orchestrator | 2025-05-14 01:49:22.401989 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-14 01:49:23.133821 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:23.133867 | orchestrator | 2025-05-14 01:49:23.133873 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-14 01:49:23.133878 | orchestrator | 2025-05-14 01:49:23.133883 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:49:24.552127 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:24.552216 | orchestrator | 2025-05-14 01:49:24.552234 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-14 01:49:25.561934 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:25.562127 | orchestrator | 2025-05-14 01:49:25.562152 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 01:49:25.562167 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-14 01:49:25.562178 | orchestrator | 2025-05-14 01:49:25.749276 | orchestrator | ok: Runtime: 0:07:03.908455 2025-05-14 01:49:25.769786 | 2025-05-14 01:49:25.769950 | TASK [Point out that the log in on the manager is now possible] 2025-05-14 01:49:25.806128 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
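As the message above points out, the manager accepts logins from this point on. A minimal sketch, assuming a local checkout of the osism/testbed repository with its Makefile targets:

    # run from the root of the testbed checkout; opens an SSH session to the manager
    make login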
2025-05-14 01:49:25.815521 | 2025-05-14 01:49:25.815664 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-14 01:49:25.852144 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from this task here. It takes a few minutes for this task to complete. 2025-05-14 01:49:25.861751 | 2025-05-14 01:49:25.861920 | TASK [Run manager part 1 + 2] 2025-05-14 01:49:26.744700 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 01:49:26.802939 | orchestrator | 2025-05-14 01:49:26.802993 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-14 01:49:26.803001 | orchestrator | 2025-05-14 01:49:26.803013 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:49:29.825645 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:29.825697 | orchestrator | 2025-05-14 01:49:29.825716 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-14 01:49:29.862873 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:29.862923 | orchestrator | 2025-05-14 01:49:29.862933 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-14 01:49:29.901773 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:29.901826 | orchestrator | 2025-05-14 01:49:29.901837 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-14 01:49:29.939032 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:29.939142 | orchestrator | 2025-05-14 01:49:29.939153 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-14 01:49:30.003613 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:30.003672 | orchestrator | 2025-05-14 01:49:30.003684 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-14 01:49:30.061180 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:30.061232 | orchestrator | 2025-05-14 01:49:30.061241 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-14 01:49:30.104537 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-14 01:49:30.104580 | orchestrator | 2025-05-14 01:49:30.104586 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-14 01:49:30.788892 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:30.788950 | orchestrator | 2025-05-14 01:49:30.788961 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-14 01:49:30.837121 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:30.837174 | orchestrator | 2025-05-14 01:49:30.837182 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-14 01:49:32.187988 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:32.188048 | orchestrator | 2025-05-14 01:49:32.188086 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-14 01:49:32.737028 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:32.737131 | orchestrator | 2025-05-14 01:49:32.737142 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-14 01:49:33.895200 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:33.895279 | orchestrator | 2025-05-14 01:49:33.895296 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-14 01:49:47.088631 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:47.088897 | orchestrator | 2025-05-14 01:49:47.088917 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-14 01:49:47.775408 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:47.775497 | orchestrator | 2025-05-14 01:49:47.775517 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-14 01:49:47.827172 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:47.827250 | orchestrator | 2025-05-14 01:49:47.827265 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-14 01:49:48.849984 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:48.850131 | orchestrator | 2025-05-14 01:49:48.850148 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-14 01:49:49.811222 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:49.811322 | orchestrator | 2025-05-14 01:49:49.811342 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-14 01:49:50.337601 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:50.337704 | orchestrator | 2025-05-14 01:49:50.337720 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-14 01:49:50.378842 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-14 01:49:50.378942 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-14 01:49:50.378957 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-14 01:49:50.378970 | orchestrator | deprecation_warnings=False in ansible.cfg. 
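The repository role above swaps the stock sources.list for a 99osism apt configuration plus an ubuntu.sources file and then refreshes the package cache. A minimal sketch of how the result could be checked on the manager, assuming standard apt tooling (only the /etc/apt/sources.list.d directory is named in the tasks; everything else here is an assumption):

    # inspect the deb822 sources dropped in by the role
    ls -l /etc/apt/sources.list.d/
    # refresh and inspect the cache against the new sources, as the "Update package cache" task does
    sudo apt-get update && apt-cache policy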
2025-05-14 01:49:52.339901 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:52.339968 | orchestrator | 2025-05-14 01:49:52.339977 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-14 01:50:01.409695 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-14 01:50:01.409752 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-14 01:50:01.409760 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-14 01:50:01.409768 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-14 01:50:01.409775 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-14 01:50:01.409781 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-14 01:50:01.409788 | orchestrator | 2025-05-14 01:50:01.409794 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-14 01:50:02.525570 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:02.525625 | orchestrator | 2025-05-14 01:50:02.525632 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-14 01:50:02.575395 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:50:02.575448 | orchestrator | 2025-05-14 01:50:02.575457 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-14 01:50:05.761714 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:05.761841 | orchestrator | 2025-05-14 01:50:05.761861 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-14 01:50:05.800493 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:50:05.800605 | orchestrator | 2025-05-14 01:50:05.800622 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-14 01:51:41.327889 | orchestrator | changed: [testbed-manager] 2025-05-14 01:51:41.327946 | orchestrator | 2025-05-14 01:51:41.327955 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-14 01:51:42.525456 | orchestrator | ok: [testbed-manager] 2025-05-14 01:51:42.525510 | orchestrator | 2025-05-14 01:51:42.525517 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 01:51:42.525525 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-14 01:51:42.525532 | orchestrator | 2025-05-14 01:51:43.000926 | orchestrator | ok: Runtime: 0:02:16.460745 2025-05-14 01:51:43.022689 | 2025-05-14 01:51:43.022913 | TASK [Reboot manager] 2025-05-14 01:51:44.566077 | orchestrator | ok: Runtime: 0:00:00.971301 2025-05-14 01:51:44.583285 | 2025-05-14 01:51:44.583461 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-14 01:51:59.095544 | orchestrator | ok 2025-05-14 01:51:59.112519 | 2025-05-14 01:51:59.112703 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-14 01:52:59.166894 | orchestrator | ok 2025-05-14 01:52:59.176915 | 2025-05-14 01:52:59.177050 | TASK [Deploy manager + bootstrap nodes] 2025-05-14 01:53:01.784627 | orchestrator | 2025-05-14 01:53:01.784828 | orchestrator | # DEPLOY MANAGER 2025-05-14 01:53:01.784851 | orchestrator | 2025-05-14 01:53:01.784864 | orchestrator | + set -e 2025-05-14 01:53:01.784877 | orchestrator | + echo 2025-05-14 01:53:01.784891 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-14 01:53:01.784904 | orchestrator | + echo 2025-05-14 01:53:01.784955 | orchestrator | + cat /opt/manager-vars.sh 2025-05-14 01:53:01.788089 | orchestrator | export NUMBER_OF_NODES=6 2025-05-14 01:53:01.788118 | orchestrator | 2025-05-14 01:53:01.788131 | orchestrator | export CEPH_VERSION=reef 2025-05-14 01:53:01.788143 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-14 01:53:01.788156 | orchestrator | export MANAGER_VERSION=8.1.0 2025-05-14 01:53:01.788179 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-14 01:53:01.788190 | orchestrator | 2025-05-14 01:53:01.788208 | orchestrator | export ARA=false 2025-05-14 01:53:01.788219 | orchestrator | export TEMPEST=false 2025-05-14 01:53:01.788237 | orchestrator | export IS_ZUUL=true 2025-05-14 01:53:01.788249 | orchestrator | 2025-05-14 01:53:01.788266 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-05-14 01:53:01.788279 | orchestrator | export EXTERNAL_API=false 2025-05-14 01:53:01.788290 | orchestrator | 2025-05-14 01:53:01.788311 | orchestrator | export IMAGE_USER=ubuntu 2025-05-14 01:53:01.788322 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-14 01:53:01.788333 | orchestrator | 2025-05-14 01:53:01.788348 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-14 01:53:01.788651 | orchestrator | 2025-05-14 01:53:01.788673 | orchestrator | + echo 2025-05-14 01:53:01.788685 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 01:53:01.789647 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 01:53:01.789668 | orchestrator | ++ INTERACTIVE=false 2025-05-14 01:53:01.789709 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-14 01:53:01.789722 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 01:53:01.789753 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 01:53:01.789791 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 01:53:01.789804 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 01:53:01.789840 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 01:53:01.789864 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 01:53:01.789880 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 01:53:01.789891 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 01:53:01.789902 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-14 01:53:01.789943 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-14 01:53:01.789968 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 01:53:01.790003 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 01:53:01.790084 | orchestrator | ++ export ARA=false 2025-05-14 01:53:01.790098 | orchestrator | ++ ARA=false 2025-05-14 01:53:01.790119 | orchestrator | ++ export TEMPEST=false 2025-05-14 01:53:01.790130 | orchestrator | ++ TEMPEST=false 2025-05-14 01:53:01.790141 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 01:53:01.790152 | orchestrator | ++ IS_ZUUL=true 2025-05-14 01:53:01.790163 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-05-14 01:53:01.790174 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-05-14 01:53:01.790190 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 01:53:01.790201 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 01:53:01.790212 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 01:53:01.790223 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 01:53:01.790233 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 01:53:01.790244 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 
01:53:01.790255 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 01:53:01.790266 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 01:53:01.790277 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-14 01:53:01.845494 | orchestrator | + docker version 2025-05-14 01:53:02.105778 | orchestrator | Client: Docker Engine - Community 2025-05-14 01:53:02.105881 | orchestrator | Version: 26.1.4 2025-05-14 01:53:02.105900 | orchestrator | API version: 1.45 2025-05-14 01:53:02.105912 | orchestrator | Go version: go1.21.11 2025-05-14 01:53:02.105923 | orchestrator | Git commit: 5650f9b 2025-05-14 01:53:02.105934 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-14 01:53:02.105946 | orchestrator | OS/Arch: linux/amd64 2025-05-14 01:53:02.105958 | orchestrator | Context: default 2025-05-14 01:53:02.106013 | orchestrator | 2025-05-14 01:53:02.106069 | orchestrator | Server: Docker Engine - Community 2025-05-14 01:53:02.106080 | orchestrator | Engine: 2025-05-14 01:53:02.106092 | orchestrator | Version: 26.1.4 2025-05-14 01:53:02.106103 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-14 01:53:02.106114 | orchestrator | Go version: go1.21.11 2025-05-14 01:53:02.106125 | orchestrator | Git commit: de5c9cf 2025-05-14 01:53:02.106163 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-14 01:53:02.106174 | orchestrator | OS/Arch: linux/amd64 2025-05-14 01:53:02.106185 | orchestrator | Experimental: false 2025-05-14 01:53:02.106196 | orchestrator | containerd: 2025-05-14 01:53:02.106221 | orchestrator | Version: 1.7.27 2025-05-14 01:53:02.106232 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-14 01:53:02.106244 | orchestrator | runc: 2025-05-14 01:53:02.106255 | orchestrator | Version: 1.2.5 2025-05-14 01:53:02.106266 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-14 01:53:02.106276 | orchestrator | docker-init: 2025-05-14 01:53:02.106288 | orchestrator | Version: 0.19.0 2025-05-14 01:53:02.106298 | orchestrator | GitCommit: de40ad0 2025-05-14 01:53:02.108082 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-14 01:53:02.117391 | orchestrator | + set -e 2025-05-14 01:53:02.117446 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 01:53:02.117746 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 01:53:02.117775 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 01:53:02.117794 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 01:53:02.117814 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 01:53:02.117833 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 01:53:02.117847 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 01:53:02.117858 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-14 01:53:02.117868 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-14 01:53:02.117879 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 01:53:02.117890 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 01:53:02.117901 | orchestrator | ++ export ARA=false 2025-05-14 01:53:02.117912 | orchestrator | ++ ARA=false 2025-05-14 01:53:02.117923 | orchestrator | ++ export TEMPEST=false 2025-05-14 01:53:02.117933 | orchestrator | ++ TEMPEST=false 2025-05-14 01:53:02.117944 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 01:53:02.117955 | orchestrator | ++ IS_ZUUL=true 2025-05-14 01:53:02.117966 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-05-14 01:53:02.117977 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-05-14 01:53:02.117988 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 01:53:02.117999 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 01:53:02.118058 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 01:53:02.118070 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 01:53:02.118081 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 01:53:02.118092 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 01:53:02.118139 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 01:53:02.118150 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 01:53:02.118161 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 01:53:02.118172 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 01:53:02.118183 | orchestrator | ++ INTERACTIVE=false 2025-05-14 01:53:02.118193 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-14 01:53:02.118205 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 01:53:02.118219 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-14 01:53:02.118231 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-14 01:53:02.125966 | orchestrator | + set -e 2025-05-14 01:53:02.126601 | orchestrator | + VERSION=8.1.0 2025-05-14 01:53:02.126627 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-14 01:53:02.133716 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-14 01:53:02.133744 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-14 01:53:02.139244 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-14 01:53:02.143907 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-14 01:53:02.152699 | orchestrator | /opt/configuration ~ 2025-05-14 01:53:02.152791 | orchestrator | + set -e 2025-05-14 01:53:02.152807 | orchestrator | + pushd /opt/configuration 2025-05-14 01:53:02.152819 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 01:53:02.155278 | orchestrator | + source /opt/venv/bin/activate 2025-05-14 01:53:02.156426 | orchestrator | ++ deactivate nondestructive 2025-05-14 01:53:02.156460 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:02.156472 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:02.156483 | orchestrator | ++ hash -r 2025-05-14 01:53:02.156499 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:02.156510 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-14 01:53:02.156521 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-14 01:53:02.156533 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-14 01:53:02.156683 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-14 01:53:02.156699 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-14 01:53:02.156716 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-14 01:53:02.156728 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-14 01:53:02.156743 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:02.156924 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:02.156942 | orchestrator | ++ export PATH 2025-05-14 01:53:02.156953 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:02.156968 | orchestrator | ++ '[' -z '' ']' 2025-05-14 01:53:02.157002 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-14 01:53:02.157077 | orchestrator | ++ PS1='(venv) ' 2025-05-14 01:53:02.157090 | orchestrator | ++ export PS1 2025-05-14 01:53:02.157101 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-14 01:53:02.157116 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-14 01:53:02.157293 | orchestrator | ++ hash -r 2025-05-14 01:53:02.157529 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-14 01:53:03.305613 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-14 01:53:03.308428 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-14 01:53:03.308476 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-14 01:53:03.310122 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-14 01:53:03.311546 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-05-14 01:53:03.321736 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.0) 2025-05-14 01:53:03.323265 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-14 01:53:03.324520 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-14 01:53:03.325793 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-14 01:53:03.363515 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-14 01:53:03.366664 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-14 01:53:03.368976 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-14 01:53:03.371323 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-14 01:53:03.376974 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-14 01:53:03.620961 | orchestrator | ++ which gilt 2025-05-14 01:53:03.624668 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-14 01:53:03.624713 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-14 01:53:03.839842 | orchestrator | osism.cfg-generics: 2025-05-14 01:53:03.839939 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-14 01:53:05.440807 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-14 01:53:05.440973 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-14 01:53:05.441018 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-14 01:53:05.441036 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-14 01:53:06.379612 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-14 01:53:06.391599 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-14 01:53:06.723510 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-14 01:53:06.773483 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 01:53:06.773586 | orchestrator | + deactivate 2025-05-14 01:53:06.773603 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-14 01:53:06.773617 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:06.773640 | orchestrator | + export PATH 2025-05-14 01:53:06.773651 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-14 01:53:06.773663 | orchestrator | + '[' -n '' ']' 2025-05-14 01:53:06.773674 | orchestrator | + hash -r 2025-05-14 01:53:06.773685 | orchestrator | ~ 2025-05-14 01:53:06.773696 | orchestrator | + '[' -n '' ']' 2025-05-14 01:53:06.773707 | orchestrator | + unset VIRTUAL_ENV 2025-05-14 01:53:06.773717 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-14 01:53:06.773728 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-14 01:53:06.773739 | orchestrator | + unset -f deactivate 2025-05-14 01:53:06.773750 | orchestrator | + popd 2025-05-14 01:53:06.775336 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-14 01:53:06.775387 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-14 01:53:06.776260 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-14 01:53:06.830305 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-14 01:53:06.830355 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-14 01:53:06.830409 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-14 01:53:06.876850 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 01:53:06.876944 | orchestrator | + source /opt/venv/bin/activate 2025-05-14 01:53:06.876962 | orchestrator | ++ deactivate nondestructive 2025-05-14 01:53:06.876975 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:06.877036 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:06.877050 | orchestrator | ++ hash -r 2025-05-14 01:53:06.877061 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:06.877072 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-14 01:53:06.877084 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-14 01:53:06.877103 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-14 01:53:06.877115 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-14 01:53:06.877126 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-14 01:53:06.877141 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-14 01:53:06.877152 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-14 01:53:06.877164 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:06.877346 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:06.877396 | orchestrator | ++ export PATH 2025-05-14 01:53:06.877439 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:06.877545 | orchestrator | ++ '[' -z '' ']' 2025-05-14 01:53:06.877560 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-14 01:53:06.877578 | orchestrator | ++ PS1='(venv) ' 2025-05-14 01:53:06.877595 | orchestrator | ++ export PS1 2025-05-14 01:53:06.877607 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-14 01:53:06.877618 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-14 01:53:06.877628 | orchestrator | ++ hash -r 2025-05-14 01:53:06.877809 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-14 01:53:08.194079 | orchestrator | 2025-05-14 01:53:08.194247 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-14 01:53:08.194265 | orchestrator | 2025-05-14 01:53:08.194278 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-14 01:53:08.762612 | orchestrator | ok: [testbed-manager] 2025-05-14 01:53:08.762738 | orchestrator | 2025-05-14 01:53:08.762755 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-14 01:53:09.768028 | orchestrator | changed: [testbed-manager] 2025-05-14 01:53:09.768156 | orchestrator | 2025-05-14 01:53:09.768170 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-14 01:53:09.768181 | orchestrator | 2025-05-14 
01:53:09.768190 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:53:13.185825 | orchestrator | ok: [testbed-manager] 2025-05-14 01:53:13.185955 | orchestrator | 2025-05-14 01:53:13.185983 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-14 01:53:17.878863 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-14 01:53:17.878979 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2) 2025-05-14 01:53:17.878995 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-05-14 01:53:17.879007 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-05-14 01:53:17.879018 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-05-14 01:53:17.879035 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine) 2025-05-14 01:53:17.879046 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-05-14 01:53:17.879060 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-05-14 01:53:17.879071 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-05-14 01:53:17.879082 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine) 2025-05-14 01:53:17.879093 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1) 2025-05-14 01:53:17.879104 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2) 2025-05-14 01:53:17.879115 | orchestrator | 2025-05-14 01:53:17.879127 | orchestrator | TASK [Check status] ************************************************************ 2025-05-14 01:54:33.840581 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-14 01:54:33.840730 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-14 01:54:33.840746 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-14 01:54:33.840757 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-14 01:54:33.840783 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j885755277852.1583', 'results_file': '/home/dragon/.ansible_async/j885755277852.1583', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840805 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j196627163866.1608', 'results_file': '/home/dragon/.ansible_async/j196627163866.1608', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840822 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-14 01:54:33.840833 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
2025-05-14 01:54:33.840845 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j40533248327.1633', 'results_file': '/home/dragon/.ansible_async/j40533248327.1633', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840857 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j762694229341.1665', 'results_file': '/home/dragon/.ansible_async/j762694229341.1665', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840869 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j827451043080.1701', 'results_file': '/home/dragon/.ansible_async/j827451043080.1701', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840880 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j336109751016.1733', 'results_file': '/home/dragon/.ansible_async/j336109751016.1733', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840891 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-14 01:54:33.840941 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j550952194885.1765', 'results_file': '/home/dragon/.ansible_async/j550952194885.1765', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840954 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j777552033500.1798', 'results_file': '/home/dragon/.ansible_async/j777552033500.1798', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840965 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j198812735086.1836', 'results_file': '/home/dragon/.ansible_async/j198812735086.1836', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840977 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j280234102734.1861', 'results_file': '/home/dragon/.ansible_async/j280234102734.1861', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840988 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j427209283255.1900', 'results_file': '/home/dragon/.ansible_async/j427209283255.1900', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.840999 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j675790054647.1925', 'results_file': '/home/dragon/.ansible_async/j675790054647.1925', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-05-14 01:54:33.841010 | orchestrator | 2025-05-14 01:54:33.841023 | 
orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-14 01:54:33.879312 | orchestrator | ok: [testbed-manager] 2025-05-14 01:54:33.879413 | orchestrator | 2025-05-14 01:54:33.879430 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-14 01:54:34.357547 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:34.357659 | orchestrator | 2025-05-14 01:54:34.357674 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-14 01:54:34.704179 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:34.704299 | orchestrator | 2025-05-14 01:54:34.704315 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-14 01:54:35.051715 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:35.051846 | orchestrator | 2025-05-14 01:54:35.051861 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-14 01:54:35.102772 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:54:35.102846 | orchestrator | 2025-05-14 01:54:35.102860 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-14 01:54:35.453750 | orchestrator | ok: [testbed-manager] 2025-05-14 01:54:35.453855 | orchestrator | 2025-05-14 01:54:35.453868 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-14 01:54:35.583261 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:54:35.583372 | orchestrator | 2025-05-14 01:54:35.583386 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-14 01:54:35.583398 | orchestrator | 2025-05-14 01:54:35.583409 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:54:37.495459 | orchestrator | ok: [testbed-manager] 2025-05-14 01:54:37.495633 | orchestrator | 2025-05-14 01:54:37.495648 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-14 01:54:37.639882 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-14 01:54:37.640005 | orchestrator | 2025-05-14 01:54:37.640018 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-14 01:54:37.719442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-14 01:54:37.719613 | orchestrator | 2025-05-14 01:54:37.719628 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-14 01:54:38.890468 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-14 01:54:38.891372 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-14 01:54:38.891406 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-14 01:54:38.891418 | orchestrator | 2025-05-14 01:54:38.891431 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-14 01:54:40.694501 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-14 01:54:40.694664 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-14 01:54:40.694680 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-14 
01:54:40.694692 | orchestrator | 2025-05-14 01:54:40.694705 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-14 01:54:41.322453 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:54:41.322612 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:41.322629 | orchestrator | 2025-05-14 01:54:41.322672 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-14 01:54:41.975166 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:54:41.975291 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:41.975307 | orchestrator | 2025-05-14 01:54:41.975320 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-14 01:54:42.040748 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:54:42.040806 | orchestrator | 2025-05-14 01:54:42.040823 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-14 01:54:42.397744 | orchestrator | ok: [testbed-manager] 2025-05-14 01:54:42.397864 | orchestrator | 2025-05-14 01:54:42.397879 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-14 01:54:42.459086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-14 01:54:42.459166 | orchestrator | 2025-05-14 01:54:42.459181 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-14 01:54:43.446392 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:43.446571 | orchestrator | 2025-05-14 01:54:43.446592 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-14 01:54:44.372581 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:44.373205 | orchestrator | 2025-05-14 01:54:44.373243 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-14 01:54:47.641767 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:47.641873 | orchestrator | 2025-05-14 01:54:47.641889 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-14 01:54:47.760176 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-14 01:54:47.760275 | orchestrator | 2025-05-14 01:54:47.760292 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-14 01:54:47.841656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 01:54:47.841735 | orchestrator | 2025-05-14 01:54:47.841749 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-14 01:54:50.333766 | orchestrator | ok: [testbed-manager] 2025-05-14 01:54:50.333879 | orchestrator | 2025-05-14 01:54:50.333895 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-14 01:54:50.436338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-14 01:54:50.436431 | orchestrator | 2025-05-14 01:54:50.436445 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-14 
01:54:51.513289 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-14 01:54:51.513396 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-14 01:54:51.513411 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-14 01:54:51.513450 | orchestrator | 2025-05-14 01:54:51.513463 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-14 01:54:51.583465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-14 01:54:51.583595 | orchestrator | 2025-05-14 01:54:51.583613 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-14 01:54:52.198870 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-14 01:54:52.198947 | orchestrator | 2025-05-14 01:54:52.198955 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-14 01:54:52.796020 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:52.796124 | orchestrator | 2025-05-14 01:54:52.796140 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-14 01:54:53.447187 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:54:53.447279 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:53.447293 | orchestrator | 2025-05-14 01:54:53.447305 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-14 01:54:53.890757 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:53.890850 | orchestrator | 2025-05-14 01:54:53.890865 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-14 01:54:54.239566 | orchestrator | ok: [testbed-manager] 2025-05-14 01:54:54.239672 | orchestrator | 2025-05-14 01:54:54.239779 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-14 01:54:54.274819 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:54:54.274908 | orchestrator | 2025-05-14 01:54:54.274922 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-14 01:54:54.915600 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:54.915703 | orchestrator | 2025-05-14 01:54:54.915719 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-14 01:54:54.989273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-14 01:54:54.989365 | orchestrator | 2025-05-14 01:54:54.989379 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-14 01:54:55.736767 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-14 01:54:55.736870 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-14 01:54:55.736884 | orchestrator | 2025-05-14 01:54:55.736897 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-14 01:54:56.380391 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-14 01:54:56.380488 | orchestrator | 2025-05-14 01:54:56.380503 | orchestrator | TASK [osism.services.netbox : 
Copy netbox configuration file] ****************** 2025-05-14 01:54:57.042506 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:57.042663 | orchestrator | 2025-05-14 01:54:57.042680 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-14 01:54:57.082510 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:54:57.082606 | orchestrator | 2025-05-14 01:54:57.082627 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-14 01:54:57.741826 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:57.741927 | orchestrator | 2025-05-14 01:54:57.741943 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-14 01:54:59.615969 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:54:59.616069 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:54:59.616083 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:54:59.616095 | orchestrator | changed: [testbed-manager] 2025-05-14 01:54:59.616107 | orchestrator | 2025-05-14 01:54:59.616119 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-14 01:55:05.908848 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-14 01:55:05.908959 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-14 01:55:05.908976 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-14 01:55:05.908988 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-14 01:55:05.909030 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-14 01:55:05.909042 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-14 01:55:05.909053 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-14 01:55:05.909083 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-14 01:55:05.909096 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-14 01:55:05.909107 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-14 01:55:05.909118 | orchestrator | 2025-05-14 01:55:05.909131 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-14 01:55:06.575745 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-14 01:55:06.575838 | orchestrator | 2025-05-14 01:55:06.575848 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-14 01:55:06.662272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-14 01:55:06.662365 | orchestrator | 2025-05-14 01:55:06.662379 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-14 01:55:07.407305 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:07.407408 | orchestrator | 2025-05-14 01:55:07.407422 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-14 01:55:08.038122 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:08.038219 | orchestrator | 2025-05-14 01:55:08.038233 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-14 
01:55:08.785650 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:08.785739 | orchestrator | 2025-05-14 01:55:08.785753 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-14 01:55:11.021769 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:11.021880 | orchestrator | 2025-05-14 01:55:11.021898 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-14 01:55:12.032148 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:12.032249 | orchestrator | 2025-05-14 01:55:12.032265 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-14 01:55:34.251031 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-14 01:55:34.251199 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:34.251218 | orchestrator | 2025-05-14 01:55:34.251232 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-14 01:55:34.314011 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:34.314204 | orchestrator | 2025-05-14 01:55:34.314220 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-14 01:55:34.314233 | orchestrator | 2025-05-14 01:55:34.314245 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-14 01:55:34.359572 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:34.359635 | orchestrator | 2025-05-14 01:55:34.359648 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-14 01:55:34.433226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-14 01:55:34.433350 | orchestrator | 2025-05-14 01:55:34.433366 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-14 01:55:35.247774 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:35.247900 | orchestrator | 2025-05-14 01:55:35.247926 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-14 01:55:35.321020 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:35.321122 | orchestrator | 2025-05-14 01:55:35.321135 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-14 01:55:35.370797 | orchestrator | ok: [testbed-manager] => { 2025-05-14 01:55:35.370865 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-14 01:55:35.370879 | orchestrator | } 2025-05-14 01:55:35.370891 | orchestrator | 2025-05-14 01:55:35.370903 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-14 01:55:36.021809 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:36.022102 | orchestrator | 2025-05-14 01:55:36.022124 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-14 01:55:36.890574 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:36.890763 | orchestrator | 2025-05-14 01:55:36.890779 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-14 01:55:36.964472 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:36.964596 | orchestrator | 2025-05-14 01:55:36.964636 | 
orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-14 01:55:37.010942 | orchestrator | ok: [testbed-manager] => { 2025-05-14 01:55:37.011055 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-14 01:55:37.011070 | orchestrator | } 2025-05-14 01:55:37.011082 | orchestrator | 2025-05-14 01:55:37.011094 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-14 01:55:37.076629 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:37.076693 | orchestrator | 2025-05-14 01:55:37.076706 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-14 01:55:37.120002 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:37.120065 | orchestrator | 2025-05-14 01:55:37.120081 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-14 01:55:37.175741 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:37.175804 | orchestrator | 2025-05-14 01:55:37.175817 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-14 01:55:37.218824 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:37.218908 | orchestrator | 2025-05-14 01:55:37.218922 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-14 01:55:37.261964 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:37.262083 | orchestrator | 2025-05-14 01:55:37.262100 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-14 01:55:37.357695 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:37.357775 | orchestrator | 2025-05-14 01:55:37.357794 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-14 01:55:38.765199 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:38.765294 | orchestrator | 2025-05-14 01:55:38.765301 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-14 01:55:38.831100 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:38.831179 | orchestrator | 2025-05-14 01:55:38.831189 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-14 01:56:38.893577 | orchestrator | Pausing for 60 seconds 2025-05-14 01:56:38.893763 | orchestrator | changed: [testbed-manager] 2025-05-14 01:56:38.893785 | orchestrator | 2025-05-14 01:56:38.893799 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-14 01:56:38.948100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-14 01:56:38.948175 | orchestrator | 2025-05-14 01:56:38.948190 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-14 02:00:51.025782 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-14 02:00:51.025914 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-14 02:00:51.025932 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 
2025-05-14 02:00:51.025944 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-14 02:00:51.025955 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-14 02:00:51.025966 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-14 02:00:51.025977 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-14 02:00:51.025987 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-14 02:00:51.025998 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-14 02:00:51.026165 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-14 02:00:51.026185 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-14 02:00:51.026196 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-14 02:00:51.026206 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-14 02:00:51.026217 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-14 02:00:51.026228 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-14 02:00:51.026242 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-14 02:00:51.026253 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-14 02:00:51.026263 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-14 02:00:51.026274 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-14 02:00:51.026284 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-14 02:00:51.026295 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-05-14 02:00:51.026306 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-05-14 02:00:51.026319 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-05-14 02:00:51.026332 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 
2025-05-14 02:00:51.026346 | orchestrator | changed: [testbed-manager] 2025-05-14 02:00:51.026360 | orchestrator | 2025-05-14 02:00:51.026373 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-14 02:00:51.026387 | orchestrator | 2025-05-14 02:00:51.026400 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 02:00:54.084943 | orchestrator | ok: [testbed-manager] 2025-05-14 02:00:54.085124 | orchestrator | 2025-05-14 02:00:54.085143 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-14 02:00:54.201686 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-14 02:00:54.201789 | orchestrator | 2025-05-14 02:00:54.201803 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-14 02:00:54.262596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 02:00:54.262661 | orchestrator | 2025-05-14 02:00:54.262674 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-14 02:00:56.256300 | orchestrator | ok: [testbed-manager] 2025-05-14 02:00:56.256406 | orchestrator | 2025-05-14 02:00:56.256423 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-14 02:00:56.316565 | orchestrator | ok: [testbed-manager] 2025-05-14 02:00:56.316644 | orchestrator | 2025-05-14 02:00:56.316657 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-14 02:00:56.429933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-14 02:00:56.430106 | orchestrator | 2025-05-14 02:00:56.430128 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-14 02:00:59.457193 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-14 02:00:59.457334 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-14 02:00:59.457351 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-14 02:00:59.457364 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-14 02:00:59.457449 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-14 02:00:59.457464 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-14 02:00:59.457475 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-14 02:00:59.457486 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-14 02:00:59.457498 | orchestrator | 2025-05-14 02:00:59.457510 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-14 02:01:00.234789 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:00.234922 | orchestrator | 2025-05-14 02:01:00.234939 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-14 02:01:00.319795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-14 02:01:00.319896 | orchestrator | 2025-05-14 02:01:00.319913 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2025-05-14 02:01:01.597367 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-14 02:01:01.597502 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-14 02:01:01.597518 | orchestrator | 2025-05-14 02:01:01.597531 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-14 02:01:02.390428 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:02.390551 | orchestrator | 2025-05-14 02:01:02.390567 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-14 02:01:02.455794 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:01:02.455913 | orchestrator | 2025-05-14 02:01:02.455928 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-14 02:01:02.526886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-14 02:01:02.526988 | orchestrator | 2025-05-14 02:01:02.527031 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-14 02:01:03.864577 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:01:03.864728 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:01:03.864752 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:03.864768 | orchestrator | 2025-05-14 02:01:03.864783 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-14 02:01:04.439908 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:04.440039 | orchestrator | 2025-05-14 02:01:04.440083 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-14 02:01:04.513225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-14 02:01:04.513359 | orchestrator | 2025-05-14 02:01:04.513375 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-14 02:01:05.687967 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:01:05.688136 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:01:05.688152 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:05.688165 | orchestrator | 2025-05-14 02:01:05.688178 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-14 02:01:06.295414 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:06.295550 | orchestrator | 2025-05-14 02:01:06.295565 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-14 02:01:06.397599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-14 02:01:06.397714 | orchestrator | 2025-05-14 02:01:06.397731 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-14 02:01:06.954770 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:06.954876 | orchestrator | 2025-05-14 02:01:06.954891 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-14 02:01:07.338256 | orchestrator | changed: 
[testbed-manager] 2025-05-14 02:01:07.338363 | orchestrator | 2025-05-14 02:01:07.338380 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-14 02:01:08.623120 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-14 02:01:08.623243 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-14 02:01:08.623256 | orchestrator | 2025-05-14 02:01:08.623267 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-14 02:01:09.455189 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:09.455291 | orchestrator | 2025-05-14 02:01:09.455306 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-14 02:01:09.871512 | orchestrator | ok: [testbed-manager] 2025-05-14 02:01:09.871621 | orchestrator | 2025-05-14 02:01:09.871637 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-14 02:01:10.242763 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:10.242865 | orchestrator | 2025-05-14 02:01:10.242881 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-14 02:01:10.294608 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:01:10.294696 | orchestrator | 2025-05-14 02:01:10.294713 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-14 02:01:10.384831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-14 02:01:10.384910 | orchestrator | 2025-05-14 02:01:10.384920 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-14 02:01:10.437494 | orchestrator | ok: [testbed-manager] 2025-05-14 02:01:10.437584 | orchestrator | 2025-05-14 02:01:10.437599 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-14 02:01:12.607257 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-14 02:01:12.607371 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-14 02:01:12.607388 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-14 02:01:12.607400 | orchestrator | 2025-05-14 02:01:12.607412 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-14 02:01:13.383972 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:13.384117 | orchestrator | 2025-05-14 02:01:13.384137 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-14 02:01:14.135597 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:14.135696 | orchestrator | 2025-05-14 02:01:14.135711 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-14 02:01:14.887652 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:14.887771 | orchestrator | 2025-05-14 02:01:14.887786 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-14 02:01:14.976138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-14 02:01:14.976232 | orchestrator | 2025-05-14 02:01:14.976247 | orchestrator | TASK [osism.services.manager : 
Include scripts vars file] ********************** 2025-05-14 02:01:15.038163 | orchestrator | ok: [testbed-manager] 2025-05-14 02:01:15.038245 | orchestrator | 2025-05-14 02:01:15.038258 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-14 02:01:15.754267 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-14 02:01:15.754369 | orchestrator | 2025-05-14 02:01:15.754384 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-14 02:01:15.843749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-14 02:01:15.843793 | orchestrator | 2025-05-14 02:01:15.843806 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-14 02:01:16.662331 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:16.662437 | orchestrator | 2025-05-14 02:01:16.662453 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-14 02:01:17.326529 | orchestrator | ok: [testbed-manager] 2025-05-14 02:01:17.326638 | orchestrator | 2025-05-14 02:01:17.326657 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-14 02:01:17.379273 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:01:17.379403 | orchestrator | 2025-05-14 02:01:17.379443 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-14 02:01:17.451789 | orchestrator | ok: [testbed-manager] 2025-05-14 02:01:17.451886 | orchestrator | 2025-05-14 02:01:17.451900 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-14 02:01:18.360643 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:18.360756 | orchestrator | 2025-05-14 02:01:18.360773 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-14 02:02:01.045836 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:01.045953 | orchestrator | 2025-05-14 02:02:01.045971 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-14 02:02:01.807351 | orchestrator | ok: [testbed-manager] 2025-05-14 02:02:01.807454 | orchestrator | 2025-05-14 02:02:01.807470 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-14 02:02:04.799893 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:04.799998 | orchestrator | 2025-05-14 02:02:04.800015 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-14 02:02:04.848482 | orchestrator | ok: [testbed-manager] 2025-05-14 02:02:04.848571 | orchestrator | 2025-05-14 02:02:04.848585 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-14 02:02:04.848598 | orchestrator | 2025-05-14 02:02:04.848609 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-14 02:02:04.895850 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:02:04.895921 | orchestrator | 2025-05-14 02:02:04.895934 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-14 02:03:04.949236 | orchestrator | Pausing for 60 seconds 2025-05-14 02:03:04.949361 | 
orchestrator | changed: [testbed-manager] 2025-05-14 02:03:04.949377 | orchestrator | 2025-05-14 02:03:04.949390 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-14 02:03:10.423774 | orchestrator | changed: [testbed-manager] 2025-05-14 02:03:10.423919 | orchestrator | 2025-05-14 02:03:10.423938 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-14 02:03:52.040451 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-14 02:03:52.040628 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-14 02:03:52.040646 | orchestrator | changed: [testbed-manager] 2025-05-14 02:03:52.040660 | orchestrator | 2025-05-14 02:03:52.040673 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-14 02:03:57.993490 | orchestrator | changed: [testbed-manager] 2025-05-14 02:03:57.993633 | orchestrator | 2025-05-14 02:03:57.993650 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-14 02:03:58.084854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-14 02:03:58.084898 | orchestrator | 2025-05-14 02:03:58.084910 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-14 02:03:58.084922 | orchestrator | 2025-05-14 02:03:58.084933 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-14 02:03:58.143618 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:03:58.143673 | orchestrator | 2025-05-14 02:03:58.143685 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:03:58.143698 | orchestrator | testbed-manager : ok=109 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-14 02:03:58.143710 | orchestrator | 2025-05-14 02:03:58.273618 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 02:03:58.273665 | orchestrator | + deactivate 2025-05-14 02:03:58.273677 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-14 02:03:58.273691 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 02:03:58.273702 | orchestrator | + export PATH 2025-05-14 02:03:58.273713 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-14 02:03:58.273725 | orchestrator | + '[' -n '' ']' 2025-05-14 02:03:58.273736 | orchestrator | + hash -r 2025-05-14 02:03:58.273747 | orchestrator | + '[' -n '' ']' 2025-05-14 02:03:58.273757 | orchestrator | + unset VIRTUAL_ENV 2025-05-14 02:03:58.273768 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-14 02:03:58.273819 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-14 02:03:58.273831 | orchestrator | + unset -f deactivate 2025-05-14 02:03:58.273842 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-14 02:03:58.279198 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-14 02:03:58.279219 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-14 02:03:58.279230 | orchestrator | + local max_attempts=60 2025-05-14 02:03:58.279241 | orchestrator | + local name=ceph-ansible 2025-05-14 02:03:58.279252 | orchestrator | + local attempt_num=1 2025-05-14 02:03:58.280456 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-14 02:03:58.318137 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:03:58.318158 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-14 02:03:58.318169 | orchestrator | + local max_attempts=60 2025-05-14 02:03:58.318180 | orchestrator | + local name=kolla-ansible 2025-05-14 02:03:58.318191 | orchestrator | + local attempt_num=1 2025-05-14 02:03:58.318701 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-14 02:03:58.342750 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:03:58.342773 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-14 02:03:58.342783 | orchestrator | + local max_attempts=60 2025-05-14 02:03:58.342794 | orchestrator | + local name=osism-ansible 2025-05-14 02:03:58.342805 | orchestrator | + local attempt_num=1 2025-05-14 02:03:58.343839 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-14 02:03:58.372576 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:03:58.372620 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-14 02:03:58.372632 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-14 02:03:59.090415 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-14 02:03:59.137543 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-14 02:03:59.137625 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-14 02:03:59.137642 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-14 02:03:59.358143 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-14 02:03:59.358262 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358278 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358370 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-14 02:03:59.358387 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-14 02:03:59.358405 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358416 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358427 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358438 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy) 2025-05-14 02:03:59.358449 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358488 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-14 02:03:59.358500 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358510 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358521 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-14 02:03:59.358532 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358542 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358568 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.358580 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-14 02:03:59.364132 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-14 02:03:59.514206 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-14 02:03:59.514388 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-14 02:03:59.514407 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-14 02:03:59.514420 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-05-14 02:03:59.514434 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-05-14 02:03:59.521781 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-14 02:03:59.573439 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-14 02:03:59.573518 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-14 02:03:59.576939 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-14 02:04:01.195752 | orchestrator | 2025-05-14 02:04:01 | INFO  | Task 010053ce-e941-4511-8dd2-750b702299fd (resolvconf) was prepared for execution. 
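For readers following the shell trace above: wait_for_container_healthy is invoked as wait_for_container_healthy 60 ceph-ansible (and again for kolla-ansible and osism-ansible) and probes docker inspect -f '{{.State.Health.Status}}' until the container reports healthy. Below is a minimal sketch of such a helper; only the two parameters and the docker inspect probe are taken from the trace, while the 10-second poll interval and the timeout handling are assumptions, not the script actually used by the testbed.

wait_for_container_healthy() {
    # Poll the Docker health status of container $2 up to $1 times.
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 10   # assumed poll interval; the real script may differ
    done
}

In the run above all three containers were already healthy, so each call returned after a single probe.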
2025-05-14 02:04:01.195851 | orchestrator | 2025-05-14 02:04:01 | INFO  | It takes a moment until task 010053ce-e941-4511-8dd2-750b702299fd (resolvconf) has been started and output is visible here. 2025-05-14 02:04:04.081789 | orchestrator | 2025-05-14 02:04:04.082689 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-14 02:04:04.083067 | orchestrator | 2025-05-14 02:04:04.083950 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 02:04:04.084958 | orchestrator | Wednesday 14 May 2025 02:04:04 +0000 (0:00:00.093) 0:00:00.093 ********* 2025-05-14 02:04:07.827791 | orchestrator | ok: [testbed-manager] 2025-05-14 02:04:07.829290 | orchestrator | 2025-05-14 02:04:07.829768 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-14 02:04:07.830191 | orchestrator | Wednesday 14 May 2025 02:04:07 +0000 (0:00:03.745) 0:00:03.839 ********* 2025-05-14 02:04:07.888814 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:04:07.888884 | orchestrator | 2025-05-14 02:04:07.889371 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-14 02:04:07.890420 | orchestrator | Wednesday 14 May 2025 02:04:07 +0000 (0:00:00.062) 0:00:03.902 ********* 2025-05-14 02:04:07.990384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-14 02:04:07.990505 | orchestrator | 2025-05-14 02:04:07.990743 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-14 02:04:07.991285 | orchestrator | Wednesday 14 May 2025 02:04:07 +0000 (0:00:00.098) 0:00:04.000 ********* 2025-05-14 02:04:08.058082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 02:04:08.058139 | orchestrator | 2025-05-14 02:04:08.058152 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-14 02:04:08.058422 | orchestrator | Wednesday 14 May 2025 02:04:08 +0000 (0:00:00.070) 0:00:04.070 ********* 2025-05-14 02:04:09.155565 | orchestrator | ok: [testbed-manager] 2025-05-14 02:04:09.155697 | orchestrator | 2025-05-14 02:04:09.155713 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-14 02:04:09.155828 | orchestrator | Wednesday 14 May 2025 02:04:09 +0000 (0:00:01.095) 0:00:05.166 ********* 2025-05-14 02:04:09.205247 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:04:09.205825 | orchestrator | 2025-05-14 02:04:09.207061 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-14 02:04:09.207088 | orchestrator | Wednesday 14 May 2025 02:04:09 +0000 (0:00:00.051) 0:00:05.218 ********* 2025-05-14 02:04:09.707054 | orchestrator | ok: [testbed-manager] 2025-05-14 02:04:09.707168 | orchestrator | 2025-05-14 02:04:09.707748 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-14 02:04:09.708744 | orchestrator | Wednesday 14 May 2025 02:04:09 +0000 (0:00:00.502) 0:00:05.720 ********* 2025-05-14 02:04:09.787853 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:04:09.789127 | orchestrator | 2025-05-14 02:04:09.789156 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-14 02:04:09.789403 | orchestrator | Wednesday 14 May 2025 02:04:09 +0000 (0:00:00.080) 0:00:05.801 ********* 2025-05-14 02:04:10.384939 | orchestrator | changed: [testbed-manager] 2025-05-14 02:04:10.385065 | orchestrator | 2025-05-14 02:04:10.385893 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-14 02:04:10.386706 | orchestrator | Wednesday 14 May 2025 02:04:10 +0000 (0:00:00.594) 0:00:06.395 ********* 2025-05-14 02:04:11.484490 | orchestrator | changed: [testbed-manager] 2025-05-14 02:04:11.484636 | orchestrator | 2025-05-14 02:04:11.484914 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-14 02:04:11.485547 | orchestrator | Wednesday 14 May 2025 02:04:11 +0000 (0:00:01.099) 0:00:07.495 ********* 2025-05-14 02:04:12.473860 | orchestrator | ok: [testbed-manager] 2025-05-14 02:04:12.473991 | orchestrator | 2025-05-14 02:04:12.474989 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-14 02:04:12.475728 | orchestrator | Wednesday 14 May 2025 02:04:12 +0000 (0:00:00.988) 0:00:08.484 ********* 2025-05-14 02:04:12.556711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-14 02:04:12.556816 | orchestrator | 2025-05-14 02:04:12.556830 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-14 02:04:12.557487 | orchestrator | Wednesday 14 May 2025 02:04:12 +0000 (0:00:00.084) 0:00:08.569 ********* 2025-05-14 02:04:13.795640 | orchestrator | changed: [testbed-manager] 2025-05-14 02:04:13.795786 | orchestrator | 2025-05-14 02:04:13.796190 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:04:13.796746 | orchestrator | 2025-05-14 02:04:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:04:13.796946 | orchestrator | 2025-05-14 02:04:13 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:04:13.798068 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:04:13.798900 | orchestrator | 2025-05-14 02:04:13.800240 | orchestrator | Wednesday 14 May 2025 02:04:13 +0000 (0:00:01.236) 0:00:09.805 ********* 2025-05-14 02:04:13.801222 | orchestrator | =============================================================================== 2025-05-14 02:04:13.802449 | orchestrator | Gathering Facts --------------------------------------------------------- 3.75s 2025-05-14 02:04:13.803359 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.24s 2025-05-14 02:04:13.804423 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s 2025-05-14 02:04:13.805441 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s 2025-05-14 02:04:13.805645 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2025-05-14 02:04:13.806402 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.59s 2025-05-14 02:04:13.807226 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2025-05-14 02:04:13.807994 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s 2025-05-14 02:04:13.808857 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-05-14 02:04:13.809304 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-05-14 02:04:13.809905 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-05-14 02:04:13.810386 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-05-14 02:04:13.810773 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-05-14 02:04:14.197028 | orchestrator | + osism apply sshconfig 2025-05-14 02:04:15.615549 | orchestrator | 2025-05-14 02:04:15 | INFO  | Task cf4b6b5f-aaa0-4824-8d6b-3355b059be57 (sshconfig) was prepared for execution. 2025-05-14 02:04:15.615678 | orchestrator | 2025-05-14 02:04:15 | INFO  | It takes a moment until task cf4b6b5f-aaa0-4824-8d6b-3355b059be57 (sshconfig) has been started and output is visible here. 
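The resolvconf play that just completed replaces the distribution-managed /etc/resolv.conf with the systemd-resolved stub resolver: it removes the packages that would otherwise own the file, links /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf, copies the resolver configuration and restarts systemd-resolved. A rough manual equivalent is sketched below; this is an illustration only, as the role's templates and name-server variables are not visible in this log.

# run as root on the target host (here testbed-manager)
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
systemctl enable --now systemd-resolved
systemctl restart systemd-resolved   # pick up the copied resolver configuration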
2025-05-14 02:04:18.621531 | orchestrator | 2025-05-14 02:04:18.622467 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-14 02:04:18.622819 | orchestrator | 2025-05-14 02:04:18.623756 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-14 02:04:18.624584 | orchestrator | Wednesday 14 May 2025 02:04:18 +0000 (0:00:00.106) 0:00:00.106 ********* 2025-05-14 02:04:19.220748 | orchestrator | ok: [testbed-manager] 2025-05-14 02:04:19.220875 | orchestrator | 2025-05-14 02:04:19.221486 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-14 02:04:19.221852 | orchestrator | Wednesday 14 May 2025 02:04:19 +0000 (0:00:00.600) 0:00:00.707 ********* 2025-05-14 02:04:19.714230 | orchestrator | changed: [testbed-manager] 2025-05-14 02:04:19.714460 | orchestrator | 2025-05-14 02:04:19.714478 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-14 02:04:19.714505 | orchestrator | Wednesday 14 May 2025 02:04:19 +0000 (0:00:00.492) 0:00:01.200 ********* 2025-05-14 02:04:25.340415 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-14 02:04:25.340561 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-14 02:04:25.340889 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-14 02:04:25.341552 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-14 02:04:25.342091 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-14 02:04:25.342662 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-14 02:04:25.343108 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-14 02:04:25.343684 | orchestrator | 2025-05-14 02:04:25.344441 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-14 02:04:25.344706 | orchestrator | Wednesday 14 May 2025 02:04:25 +0000 (0:00:05.626) 0:00:06.826 ********* 2025-05-14 02:04:25.416891 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:04:25.417010 | orchestrator | 2025-05-14 02:04:25.417681 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-14 02:04:25.418241 | orchestrator | Wednesday 14 May 2025 02:04:25 +0000 (0:00:00.077) 0:00:06.903 ********* 2025-05-14 02:04:25.959784 | orchestrator | changed: [testbed-manager] 2025-05-14 02:04:25.960537 | orchestrator | 2025-05-14 02:04:25.960861 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:04:25.961253 | orchestrator | 2025-05-14 02:04:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:04:25.961480 | orchestrator | 2025-05-14 02:04:25 | INFO  | Please wait and do not abort execution. 
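The sshconfig play above follows a fragment-and-assemble pattern: it determines the operator's home directory, writes one config snippet per managed host into .ssh/config.d/, and then assembles the fragments into a single SSH client configuration. A minimal sketch of that pattern follows; the fragment contents here are placeholders, since the role templates the real connection details, which the log does not show.

mkdir -p ~/.ssh/config.d
for host in testbed-manager testbed-node-{0..5}; do
    # one fragment per host; real fragments carry user/address details
    printf 'Host %s\n    HostName %s\n' "$host" "$host" > ~/.ssh/config.d/"$host"
done
cat ~/.ssh/config.d/* > ~/.ssh/config
chmod 600 ~/.ssh/config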
2025-05-14 02:04:25.962596 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:04:25.963614 | orchestrator | 2025-05-14 02:04:25.964498 | orchestrator | Wednesday 14 May 2025 02:04:25 +0000 (0:00:00.544) 0:00:07.448 ********* 2025-05-14 02:04:25.965483 | orchestrator | =============================================================================== 2025-05-14 02:04:25.966419 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.63s 2025-05-14 02:04:25.967092 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2025-05-14 02:04:25.967576 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2025-05-14 02:04:25.968982 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-05-14 02:04:25.969735 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-05-14 02:04:26.260460 | orchestrator | + osism apply known-hosts 2025-05-14 02:04:27.565076 | orchestrator | 2025-05-14 02:04:27 | INFO  | Task 4b4e639e-6878-4c3b-8f7d-33db9692042e (known-hosts) was prepared for execution. 2025-05-14 02:04:27.565175 | orchestrator | 2025-05-14 02:04:27 | INFO  | It takes a moment until task 4b4e639e-6878-4c3b-8f7d-33db9692042e (known-hosts) has been started and output is visible here. 2025-05-14 02:04:30.495557 | orchestrator | 2025-05-14 02:04:30.495688 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-14 02:04:30.497924 | orchestrator | 2025-05-14 02:04:30.498416 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-14 02:04:30.499075 | orchestrator | Wednesday 14 May 2025 02:04:30 +0000 (0:00:00.119) 0:00:00.119 ********* 2025-05-14 02:04:36.585096 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-14 02:04:36.586304 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-14 02:04:36.586434 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-14 02:04:36.586574 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-14 02:04:36.586820 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-14 02:04:36.587064 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-14 02:04:36.587157 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-14 02:04:36.587477 | orchestrator | 2025-05-14 02:04:36.587576 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-14 02:04:36.587868 | orchestrator | Wednesday 14 May 2025 02:04:36 +0000 (0:00:06.089) 0:00:06.208 ********* 2025-05-14 02:04:36.762302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-14 02:04:36.763376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-14 02:04:36.765195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-14 
02:04:36.765219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-14 02:04:36.766078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-14 02:04:36.766857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-14 02:04:36.767727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-14 02:04:36.767954 | orchestrator | 2025-05-14 02:04:36.768499 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:36.768952 | orchestrator | Wednesday 14 May 2025 02:04:36 +0000 (0:00:00.180) 0:00:06.389 ********* 2025-05-14 02:04:37.986249 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCl5li2UNvSMF5tg+cgqfLfqpvYg9Dw9GuiFEW7OWsCFZ5nVA+JDkoc2YkHm2ZaEk+JE8mlzd30w89f07zs6Lenh8j1RjMqe71XDcdQf5HsZ967bgJ5ylCzFWs+cohTI8B8MdHnUaot5DXtad6ONGXgykr5eQgW0iGtxBXhdGd5POXVkUl0R4i+fkm11HHQtvi7evzs0/2iqeKVqwJPJunTEk1AOQMKWBt5gGcQ6BuFIgPkr24HAA3QAsc+o5OyXlprWN3/urO3VN8HfgUI3WDmP8wj+7Ch83sH0kqPvr8zam9tma8PfrYwRXnpOml4CeY+D4TNWT7k6FBecoNegdk/R6pIyz35df7eoHYttfCbYQhu71PJhfov4n5yR5g2RxD3FkU3Us72zT4hdi2vkZulgZZzF2WbvST5dbMP8UaZhsETnl2iK5tLB75ZGCQUTNXLVEsmcXAWXRyecMYC/+pyG/D/uXazqCYGPLEZ2kAucZqWqTMYRBohWVBH1Fv8Uh0=) 2025-05-14 02:04:37.986412 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKMZ94TnbFIgHurddNCUwI8LmWRwL2hYVE8i3kP8U1bX64N4l9iWpi/EK5LBKn0vW3JbvkHNYN5lUw9A/KZ4nv0=) 2025-05-14 02:04:37.986517 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINjnQDs+P1uG+sJg/QiufsbD+lp5GN9Muu4+E+i1lna+) 2025-05-14 02:04:37.987036 | orchestrator | 2025-05-14 02:04:37.987427 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:37.988461 | orchestrator | Wednesday 14 May 2025 02:04:37 +0000 (0:00:01.221) 0:00:07.610 ********* 2025-05-14 02:04:39.112214 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVAQNJbFz9vS6nNffF+TK8hhKa3fd0ZZqh5P1EBiYiEpKd3qlzwrU38THYoPjZI0zfVpY3bvMGAyg55ifdb0yGYwcA66ma2RBUtY2h7lWZ92oTS0cF9fHX8+Qrx6X1sT4Cb3PKaRmtzNgWzZrpTqVQLmc+FKQ5NupDgxDGnZS4lxsn6AxeXyAg82JQJgZnBaqUFiviIcxr4XV7hfpjGW8Co11/8Fk4J+gu2rFLQDH85K95E7shRaz+/a0hVCIWRPwrCrfmrIugpwBHSzOIobG1eG5VYZjd/KK+X+ku6xkwMIDe2Abcvve/9SqWdGw60/kttaa+RZxIYT5DcKUAeHhFTUww+JZmr3E4Lz0hEeXYomg9JIbU3nr81xtynlU1QvFQed+gFtIWeFuImOaNRHxF5rUb0WnqmcUoR4EuICFiPbyTYdIQ2O08gAqX8GZMmLER3Dc7t7Fm83jGnGLBzUYjrQILZBmJg8nF9XGYkWW4RtWpGtDmejXvIi7KVbBvmIs=) 2025-05-14 02:04:39.113032 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF97JmWt6G3UMKCB/CL5fzb7XLaKEtqu4K3AS1MHLGXJqiGyhOmHTI/kbOQU5JrPVTRY876CEzuj58SQA4eDpZc=) 2025-05-14 02:04:39.114175 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMPza/J1powWKGbYtU1DnEul7JrTOFF6CigwO+6hMcB) 2025-05-14 02:04:39.114708 | orchestrator | 2025-05-14 02:04:39.115069 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:39.115518 | orchestrator | Wednesday 14 May 2025 02:04:39 +0000 (0:00:01.126) 0:00:08.737 ********* 2025-05-14 02:04:40.158646 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLuOchFdSqdWjChtpBNtXPeHX1Sr9Er5sYSAI1ZmrNDylbyirA1Frl1IEVDySh3EBhNI7yk5AmpYa7Zqtsy/FCyZPIX//2tixAwX1dbMJJoQOmf58wJDyK9FTuYZeKM+Gr0oxuEMlu8RqyhSq0tcsjym5viYbbhC8Wkosse98vpZbdiAXg8905VZ347fy9MmgU1Tg5Lm31PZFswjdbTn047z8L0D7KE6cSbh75E1awkVUtYrqBIGebRjaPQEtwcPHABwAGV7/IToPX8QKB1qerKMARbkj3sr7fujiu1ZDOv6lEwzV/fsVAr4GzbEC627OLC8G7aEmVDWJzEE6Rvs5aPcDfldTpD3Ki4E1i9/UWFoVqii+N5DBaytS1xbwHKu6XnNLj8zCs6IwyN5MObUesZLZ20ZzscVLCUeh1qdOD6xM/SAkPgWkvXd/WNxX59qGP53P2Vp595ogIofkDp6SPJ0XhgvpukKKNoeoA4toK2x1mkOxuBUauvNWOXJAEunc=) 2025-05-14 02:04:40.158733 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNfabl8p7CeYBhXTGZrp0QO93Am9TlkxODhjh1BLRyY7i0+ntcwZAM80nMO7It7u+jbWk6MfBKBIvUZghpZUFdg=) 2025-05-14 02:04:40.158746 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBvzF4Al3K59lWTeriotOnMw/GhBgl4PDj1ekgqn9UbK) 2025-05-14 02:04:40.159179 | orchestrator | 2025-05-14 02:04:40.159683 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:40.160082 | orchestrator | Wednesday 14 May 2025 02:04:40 +0000 (0:00:01.047) 0:00:09.784 ********* 2025-05-14 02:04:41.241888 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjrhrWg70sW36t3qHOVw35WCe+otCJxNIm/9xwUiODWs5kv+hFDWt3amd0Y1wNBGfqH6s2zdHA38iuTag4HuB2i9AR7FAkFIBhUC+dno3HNlKN0nOLyS7TA9q3KgW7T+LrD8Ky01at+TAbp1RKBujELuk82UE6dyrfA3TMVbq4nsvP3JfmxyYDyq9e6sXDmVYnyUc0o/uzka1fgXjxnPI0oJ6SqUP+MbSJfMgQCKBdY+mYF9Gu4zyaOHgz6nPIkfzSu92OwTczpHIeLxPJ05zsSA2M6VwhTymigl13piFn6sm+ZbSQOf14+RYFaeafjGcm2g4s2YmULoBe6sHeUfm3oYGsLcOIjy1eJRCBHpz54/wguGzq5FR1EnPuEFJzzLP1cwMH4Elm5V1teQM5DRitIRrWiY4bxyDwlXXeu5xi5KBITZPr3XzBA5fX9SwbN9RbU8Y/3GAtwoanxUiTwDjtFYZnSy93T+dmUmxWL7sizQFwhcVycfKMPxqYVoxS80E=) 2025-05-14 02:04:41.242087 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGfslH0sGGhjKGw/NXPT+qAfl2Y0YWyAkE7xaqSmybO4C3t9tDurcSTxDxc29LiEBuFNYvnHk6A2wglEEP3+gYc=) 2025-05-14 02:04:41.242847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJB0FxtdLq4qvV76zhduhxY5hHVhLmamjIjxwcYRvXuf) 2025-05-14 02:04:41.245303 | orchestrator | 2025-05-14 02:04:41.245444 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:41.245464 | orchestrator | Wednesday 14 May 2025 02:04:41 +0000 (0:00:01.083) 0:00:10.867 ********* 2025-05-14 02:04:42.348028 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDTqyGZ91EgL+9eu4VpQwDOqeuFikqYsI1LwVVWgjXIw6a8kTyu/U5sSUaiuxMTWZ9zTf+tyGJ87+PRL+L67A181oSzlna/z3iLlz/l4E/KqpKWmTMnrCZlRAxBTfrUbDGLkQcFYlpXqZtsZOiw9GjrgGfXq/KDBFomjh+dOkwqn7gvWwdEqpZEhLsjxqQlRLs2EyE3S3HXR2E4kN5BRRo8YMto0ADvGhUHSrPaZgPALu0ieDYs7onCIwneXjI/4ptJsvn7zZl5fI+836V0dmj2RG6WOCpFlEXXIZ1tqzcSR768Q8OYedfPrMlgBB0mYDSCj/Ax/HEhFMFFTHvRldRFNKbTTVozY6F3Y350vkI7Ip6bdpVtksLt8y57IcMOfWXtqzWcMJmi4oSGrQhUbZcA2V8UZtZjvDJqV46w1qtlHG/2zu93ZOVBT5jVmbAE8JXZwQNFC3iMlz0BsArJ7a1y+uXCRXq0le3obxiB8YCbLXKLuSpSfDD81LAOcI8qRv0=) 2025-05-14 02:04:42.348255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIZ1+munnaUmtlYaRwEIiNA45zdf9PR4v0xtgSx82HAggxAc3+OYMDVqzhXO8CwrgspIdEacHyxiCZZzGnPDt20=) 2025-05-14 02:04:42.348829 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHPxe8HiEfbygkBEWiPEfMCWJTYpRP7TAD/B0BR6S5hz) 2025-05-14 02:04:42.349699 | orchestrator | 2025-05-14 02:04:42.350542 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:42.351335 | orchestrator | Wednesday 14 May 2025 02:04:42 +0000 (0:00:01.105) 0:00:11.973 ********* 2025-05-14 02:04:43.439489 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvfNqwA2vD09s59s5YjiGqkL06HnLUTXcGv9SL/nVxy4xBo6+eink1U071C+x1pqZQieEIpe6pxKFs5tlfUly7jVmJnD4CV8PGqouSRHnj3Jw59jzvOiVXgNOBPkmw5orj1oVTBOPdBDH3m08wK4Evr8wJpI2eil/x00z8lqhXIrTFkv2q9po/o6Ekd9XNDsIuvfHMTDAOX2e7WkBHs5eD0vHWVpsS5p4WOvYrAMfZmmSCGtp5hLamHUG5gDVGBJxOzHJ5Kk736eo1HtAAe49NxVWqeM0Y2PZyb4oZYMbfb0Xgr9BBNUe+2PvS1i8cn9P2L+8BOuuz5erGRZOaXU/31C98d3Fy/V+pG1hfHs5MtLpRQUJHWN6l6WG2NY85Fhqc5YMJvyWl5MTlMCeT4PJfETutMx85tqSae2tcn3vsdFcCpjEq+bvHxYTKouD0glnntA4wkb3/3F6SctOGhZTi6CQ2u/FvIpUh/zdvi8LB1VthjxeFWFNe1A6Uk2fGK80=) 2025-05-14 02:04:43.439790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD9yqytLMrdWoCw1kdjlLg9q9iil61uXfCjaJjKHCl5ply5o4+dciWg/L0a8E8jmEN2XC25GxjuskxaJ0dJ11j8=) 2025-05-14 02:04:43.440723 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGInKTEe/VRcHbNRXbB/iRKUzMmp98/dNJ3LjRvtVfti) 2025-05-14 02:04:43.441175 | orchestrator | 2025-05-14 02:04:43.441697 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:43.442468 | orchestrator | Wednesday 14 May 2025 02:04:43 +0000 (0:00:01.091) 0:00:13.064 ********* 2025-05-14 02:04:44.525111 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7Weyb69MNoZrnhdLDq3ZG+n90/Bx1eEZUiSkUWtrVXDgxcPwd0+v8Pg0WhS2LvZx2cOgjHvOsg0nnlrdUGn0UvRN0PEafj0r9xIoqJXZ+8C7VaK1LwkAecvj+vBb0l6wgXK8nKz2358gZpnKlEiWdKdgAJCHHuMXF5TYdn3UmC4GZFqXfE9o59WSZ/k8ByMqK6uibTUAZaJuD0hBIjtj9xqrREBugZwj9U2fknPkySDaOp2pd5kegOLnWxrdhX6JM2KjY0TX3ctoEKJ9y+tlmWlcUEZ73C+FhoJtEfVoMl2xpczrEYweAz8mGbmWNohbPGqTMKSBxqy4mumLaxN+VhTGoLVFxq1eJq87OwKTNXqTOfDj82mwLKdKevhw2uvk3tjcQpAjAvQFj2iNwhnVdcYeiSx/neJc7T+X2cj2Frrw//xr3aJaHjH9ly3nWYOnj65CvkHlY05io2mr/86i0qIKmEeP+6bkm1QhJOZDPGtPQWDsIe8Bor5AJdzlY2i8=) 2025-05-14 02:04:44.525458 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKuS7fFceakUccqNPLfeQiZATaNMcUq3f1nF9snckQax1RRjSAgS0EN6PQdIYZL/MW4Pu7deid1Zpvtz9RAwIP8=) 2025-05-14 
02:04:44.526177 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJv4Q/Rbwenm2Mk7cHH1G4HCi2VKtZiRFOtSKRphr21e) 2025-05-14 02:04:44.526498 | orchestrator | 2025-05-14 02:04:44.526787 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-14 02:04:44.527891 | orchestrator | Wednesday 14 May 2025 02:04:44 +0000 (0:00:01.084) 0:00:14.149 ********* 2025-05-14 02:04:49.862256 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-14 02:04:49.862424 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-14 02:04:49.862707 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-14 02:04:49.863320 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-14 02:04:49.865019 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-14 02:04:49.867058 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-14 02:04:49.867550 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-14 02:04:49.868224 | orchestrator | 2025-05-14 02:04:49.868699 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-14 02:04:49.869214 | orchestrator | Wednesday 14 May 2025 02:04:49 +0000 (0:00:05.337) 0:00:19.486 ********* 2025-05-14 02:04:50.033949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-14 02:04:50.034110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-14 02:04:50.034800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-14 02:04:50.035480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-14 02:04:50.036604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-14 02:04:50.037883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-14 02:04:50.038630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-14 02:04:50.039570 | orchestrator | 2025-05-14 02:04:50.040012 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:50.040894 | orchestrator | Wednesday 14 May 2025 02:04:50 +0000 (0:00:00.174) 0:00:19.660 ********* 2025-05-14 02:04:51.118760 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINjnQDs+P1uG+sJg/QiufsbD+lp5GN9Muu4+E+i1lna+) 2025-05-14 02:04:51.118870 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCl5li2UNvSMF5tg+cgqfLfqpvYg9Dw9GuiFEW7OWsCFZ5nVA+JDkoc2YkHm2ZaEk+JE8mlzd30w89f07zs6Lenh8j1RjMqe71XDcdQf5HsZ967bgJ5ylCzFWs+cohTI8B8MdHnUaot5DXtad6ONGXgykr5eQgW0iGtxBXhdGd5POXVkUl0R4i+fkm11HHQtvi7evzs0/2iqeKVqwJPJunTEk1AOQMKWBt5gGcQ6BuFIgPkr24HAA3QAsc+o5OyXlprWN3/urO3VN8HfgUI3WDmP8wj+7Ch83sH0kqPvr8zam9tma8PfrYwRXnpOml4CeY+D4TNWT7k6FBecoNegdk/R6pIyz35df7eoHYttfCbYQhu71PJhfov4n5yR5g2RxD3FkU3Us72zT4hdi2vkZulgZZzF2WbvST5dbMP8UaZhsETnl2iK5tLB75ZGCQUTNXLVEsmcXAWXRyecMYC/+pyG/D/uXazqCYGPLEZ2kAucZqWqTMYRBohWVBH1Fv8Uh0=) 2025-05-14 02:04:51.118890 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKMZ94TnbFIgHurddNCUwI8LmWRwL2hYVE8i3kP8U1bX64N4l9iWpi/EK5LBKn0vW3JbvkHNYN5lUw9A/KZ4nv0=) 2025-05-14 02:04:51.118904 | orchestrator | 2025-05-14 02:04:51.119132 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:51.119263 | orchestrator | Wednesday 14 May 2025 02:04:51 +0000 (0:00:01.081) 0:00:20.742 ********* 2025-05-14 02:04:52.183625 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF97JmWt6G3UMKCB/CL5fzb7XLaKEtqu4K3AS1MHLGXJqiGyhOmHTI/kbOQU5JrPVTRY876CEzuj58SQA4eDpZc=) 2025-05-14 02:04:52.185150 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVAQNJbFz9vS6nNffF+TK8hhKa3fd0ZZqh5P1EBiYiEpKd3qlzwrU38THYoPjZI0zfVpY3bvMGAyg55ifdb0yGYwcA66ma2RBUtY2h7lWZ92oTS0cF9fHX8+Qrx6X1sT4Cb3PKaRmtzNgWzZrpTqVQLmc+FKQ5NupDgxDGnZS4lxsn6AxeXyAg82JQJgZnBaqUFiviIcxr4XV7hfpjGW8Co11/8Fk4J+gu2rFLQDH85K95E7shRaz+/a0hVCIWRPwrCrfmrIugpwBHSzOIobG1eG5VYZjd/KK+X+ku6xkwMIDe2Abcvve/9SqWdGw60/kttaa+RZxIYT5DcKUAeHhFTUww+JZmr3E4Lz0hEeXYomg9JIbU3nr81xtynlU1QvFQed+gFtIWeFuImOaNRHxF5rUb0WnqmcUoR4EuICFiPbyTYdIQ2O08gAqX8GZMmLER3Dc7t7Fm83jGnGLBzUYjrQILZBmJg8nF9XGYkWW4RtWpGtDmejXvIi7KVbBvmIs=) 2025-05-14 02:04:52.185189 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMPza/J1powWKGbYtU1DnEul7JrTOFF6CigwO+6hMcB) 2025-05-14 02:04:52.186149 | orchestrator | 2025-05-14 02:04:52.186904 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:52.187191 | orchestrator | Wednesday 14 May 2025 02:04:52 +0000 (0:00:01.067) 0:00:21.810 ********* 2025-05-14 02:04:53.254992 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNfabl8p7CeYBhXTGZrp0QO93Am9TlkxODhjh1BLRyY7i0+ntcwZAM80nMO7It7u+jbWk6MfBKBIvUZghpZUFdg=) 2025-05-14 02:04:53.255863 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBvzF4Al3K59lWTeriotOnMw/GhBgl4PDj1ekgqn9UbK) 2025-05-14 02:04:53.256749 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLuOchFdSqdWjChtpBNtXPeHX1Sr9Er5sYSAI1ZmrNDylbyirA1Frl1IEVDySh3EBhNI7yk5AmpYa7Zqtsy/FCyZPIX//2tixAwX1dbMJJoQOmf58wJDyK9FTuYZeKM+Gr0oxuEMlu8RqyhSq0tcsjym5viYbbhC8Wkosse98vpZbdiAXg8905VZ347fy9MmgU1Tg5Lm31PZFswjdbTn047z8L0D7KE6cSbh75E1awkVUtYrqBIGebRjaPQEtwcPHABwAGV7/IToPX8QKB1qerKMARbkj3sr7fujiu1ZDOv6lEwzV/fsVAr4GzbEC627OLC8G7aEmVDWJzEE6Rvs5aPcDfldTpD3Ki4E1i9/UWFoVqii+N5DBaytS1xbwHKu6XnNLj8zCs6IwyN5MObUesZLZ20ZzscVLCUeh1qdOD6xM/SAkPgWkvXd/WNxX59qGP53P2Vp595ogIofkDp6SPJ0XhgvpukKKNoeoA4toK2x1mkOxuBUauvNWOXJAEunc=) 2025-05-14 
02:04:53.257244 | orchestrator | 2025-05-14 02:04:53.257931 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:53.258665 | orchestrator | Wednesday 14 May 2025 02:04:53 +0000 (0:00:01.070) 0:00:22.880 ********* 2025-05-14 02:04:54.413837 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJB0FxtdLq4qvV76zhduhxY5hHVhLmamjIjxwcYRvXuf) 2025-05-14 02:04:54.414902 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjrhrWg70sW36t3qHOVw35WCe+otCJxNIm/9xwUiODWs5kv+hFDWt3amd0Y1wNBGfqH6s2zdHA38iuTag4HuB2i9AR7FAkFIBhUC+dno3HNlKN0nOLyS7TA9q3KgW7T+LrD8Ky01at+TAbp1RKBujELuk82UE6dyrfA3TMVbq4nsvP3JfmxyYDyq9e6sXDmVYnyUc0o/uzka1fgXjxnPI0oJ6SqUP+MbSJfMgQCKBdY+mYF9Gu4zyaOHgz6nPIkfzSu92OwTczpHIeLxPJ05zsSA2M6VwhTymigl13piFn6sm+ZbSQOf14+RYFaeafjGcm2g4s2YmULoBe6sHeUfm3oYGsLcOIjy1eJRCBHpz54/wguGzq5FR1EnPuEFJzzLP1cwMH4Elm5V1teQM5DRitIRrWiY4bxyDwlXXeu5xi5KBITZPr3XzBA5fX9SwbN9RbU8Y/3GAtwoanxUiTwDjtFYZnSy93T+dmUmxWL7sizQFwhcVycfKMPxqYVoxS80E=) 2025-05-14 02:04:54.414941 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGfslH0sGGhjKGw/NXPT+qAfl2Y0YWyAkE7xaqSmybO4C3t9tDurcSTxDxc29LiEBuFNYvnHk6A2wglEEP3+gYc=) 2025-05-14 02:04:54.415195 | orchestrator | 2025-05-14 02:04:54.415930 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:54.416466 | orchestrator | Wednesday 14 May 2025 02:04:54 +0000 (0:00:01.158) 0:00:24.039 ********* 2025-05-14 02:04:55.561312 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIZ1+munnaUmtlYaRwEIiNA45zdf9PR4v0xtgSx82HAggxAc3+OYMDVqzhXO8CwrgspIdEacHyxiCZZzGnPDt20=) 2025-05-14 02:04:55.561457 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTqyGZ91EgL+9eu4VpQwDOqeuFikqYsI1LwVVWgjXIw6a8kTyu/U5sSUaiuxMTWZ9zTf+tyGJ87+PRL+L67A181oSzlna/z3iLlz/l4E/KqpKWmTMnrCZlRAxBTfrUbDGLkQcFYlpXqZtsZOiw9GjrgGfXq/KDBFomjh+dOkwqn7gvWwdEqpZEhLsjxqQlRLs2EyE3S3HXR2E4kN5BRRo8YMto0ADvGhUHSrPaZgPALu0ieDYs7onCIwneXjI/4ptJsvn7zZl5fI+836V0dmj2RG6WOCpFlEXXIZ1tqzcSR768Q8OYedfPrMlgBB0mYDSCj/Ax/HEhFMFFTHvRldRFNKbTTVozY6F3Y350vkI7Ip6bdpVtksLt8y57IcMOfWXtqzWcMJmi4oSGrQhUbZcA2V8UZtZjvDJqV46w1qtlHG/2zu93ZOVBT5jVmbAE8JXZwQNFC3iMlz0BsArJ7a1y+uXCRXq0le3obxiB8YCbLXKLuSpSfDD81LAOcI8qRv0=) 2025-05-14 02:04:55.563057 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHPxe8HiEfbygkBEWiPEfMCWJTYpRP7TAD/B0BR6S5hz) 2025-05-14 02:04:55.563303 | orchestrator | 2025-05-14 02:04:55.563807 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:55.564409 | orchestrator | Wednesday 14 May 2025 02:04:55 +0000 (0:00:01.145) 0:00:25.185 ********* 2025-05-14 02:04:56.700488 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCvfNqwA2vD09s59s5YjiGqkL06HnLUTXcGv9SL/nVxy4xBo6+eink1U071C+x1pqZQieEIpe6pxKFs5tlfUly7jVmJnD4CV8PGqouSRHnj3Jw59jzvOiVXgNOBPkmw5orj1oVTBOPdBDH3m08wK4Evr8wJpI2eil/x00z8lqhXIrTFkv2q9po/o6Ekd9XNDsIuvfHMTDAOX2e7WkBHs5eD0vHWVpsS5p4WOvYrAMfZmmSCGtp5hLamHUG5gDVGBJxOzHJ5Kk736eo1HtAAe49NxVWqeM0Y2PZyb4oZYMbfb0Xgr9BBNUe+2PvS1i8cn9P2L+8BOuuz5erGRZOaXU/31C98d3Fy/V+pG1hfHs5MtLpRQUJHWN6l6WG2NY85Fhqc5YMJvyWl5MTlMCeT4PJfETutMx85tqSae2tcn3vsdFcCpjEq+bvHxYTKouD0glnntA4wkb3/3F6SctOGhZTi6CQ2u/FvIpUh/zdvi8LB1VthjxeFWFNe1A6Uk2fGK80=) 2025-05-14 02:04:56.702945 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD9yqytLMrdWoCw1kdjlLg9q9iil61uXfCjaJjKHCl5ply5o4+dciWg/L0a8E8jmEN2XC25GxjuskxaJ0dJ11j8=) 2025-05-14 02:04:56.703710 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGInKTEe/VRcHbNRXbB/iRKUzMmp98/dNJ3LjRvtVfti) 2025-05-14 02:04:56.704497 | orchestrator | 2025-05-14 02:04:56.705687 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:04:56.706588 | orchestrator | Wednesday 14 May 2025 02:04:56 +0000 (0:00:01.139) 0:00:26.324 ********* 2025-05-14 02:04:57.803707 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJv4Q/Rbwenm2Mk7cHH1G4HCi2VKtZiRFOtSKRphr21e) 2025-05-14 02:04:57.803980 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7Weyb69MNoZrnhdLDq3ZG+n90/Bx1eEZUiSkUWtrVXDgxcPwd0+v8Pg0WhS2LvZx2cOgjHvOsg0nnlrdUGn0UvRN0PEafj0r9xIoqJXZ+8C7VaK1LwkAecvj+vBb0l6wgXK8nKz2358gZpnKlEiWdKdgAJCHHuMXF5TYdn3UmC4GZFqXfE9o59WSZ/k8ByMqK6uibTUAZaJuD0hBIjtj9xqrREBugZwj9U2fknPkySDaOp2pd5kegOLnWxrdhX6JM2KjY0TX3ctoEKJ9y+tlmWlcUEZ73C+FhoJtEfVoMl2xpczrEYweAz8mGbmWNohbPGqTMKSBxqy4mumLaxN+VhTGoLVFxq1eJq87OwKTNXqTOfDj82mwLKdKevhw2uvk3tjcQpAjAvQFj2iNwhnVdcYeiSx/neJc7T+X2cj2Frrw//xr3aJaHjH9ly3nWYOnj65CvkHlY05io2mr/86i0qIKmEeP+6bkm1QhJOZDPGtPQWDsIe8Bor5AJdzlY2i8=) 2025-05-14 02:04:57.804657 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKuS7fFceakUccqNPLfeQiZATaNMcUq3f1nF9snckQax1RRjSAgS0EN6PQdIYZL/MW4Pu7deid1Zpvtz9RAwIP8=) 2025-05-14 02:04:57.805809 | orchestrator | 2025-05-14 02:04:57.807019 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-14 02:04:57.807681 | orchestrator | Wednesday 14 May 2025 02:04:57 +0000 (0:00:01.103) 0:00:27.428 ********* 2025-05-14 02:04:57.970507 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-14 02:04:57.970645 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-14 02:04:57.972466 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-14 02:04:57.973084 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-14 02:04:57.973485 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-14 02:04:57.974121 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-14 02:04:57.974920 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-14 02:04:57.975052 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:04:57.975568 | orchestrator | 2025-05-14 02:04:57.976326 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-05-14 02:04:57.976350 | orchestrator | Wednesday 14 May 2025 02:04:57 +0000 (0:00:00.167) 0:00:27.596 ********* 2025-05-14 02:04:58.033289 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:04:58.033584 | orchestrator | 2025-05-14 02:04:58.036144 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-14 02:04:58.037088 | orchestrator | Wednesday 14 May 2025 02:04:58 +0000 (0:00:00.063) 0:00:27.659 ********* 2025-05-14 02:04:58.091107 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:04:58.091852 | orchestrator | 2025-05-14 02:04:58.092864 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-14 02:04:58.093778 | orchestrator | Wednesday 14 May 2025 02:04:58 +0000 (0:00:00.058) 0:00:27.718 ********* 2025-05-14 02:04:58.876571 | orchestrator | changed: [testbed-manager] 2025-05-14 02:04:58.877441 | orchestrator | 2025-05-14 02:04:58.877564 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:04:58.877708 | orchestrator | 2025-05-14 02:04:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:04:58.877729 | orchestrator | 2025-05-14 02:04:58 | INFO  | Please wait and do not abort execution. 2025-05-14 02:04:58.878930 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:04:58.880033 | orchestrator | 2025-05-14 02:04:58.880541 | orchestrator | Wednesday 14 May 2025 02:04:58 +0000 (0:00:00.784) 0:00:28.503 ********* 2025-05-14 02:04:58.881489 | orchestrator | =============================================================================== 2025-05-14 02:04:58.882359 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.09s 2025-05-14 02:04:58.883122 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.34s 2025-05-14 02:04:58.884099 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-05-14 02:04:58.884341 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-05-14 02:04:58.884892 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-05-14 02:04:58.885611 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-14 02:04:58.886132 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-05-14 02:04:58.886573 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-14 02:04:58.886885 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-14 02:04:58.887580 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-14 02:04:58.887850 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-14 02:04:58.888236 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-14 02:04:58.888671 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-14 02:04:58.888871 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-14 02:04:58.889186 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-14 02:04:58.889660 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-14 02:04:58.889749 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.78s 2025-05-14 02:04:58.890185 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-05-14 02:04:58.890458 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-14 02:04:58.890854 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-05-14 02:04:59.281783 | orchestrator | + osism apply squid 2025-05-14 02:05:00.731828 | orchestrator | 2025-05-14 02:05:00 | INFO  | Task b93751b3-37ad-454a-bfcf-a3e7d6e2e85a (squid) was prepared for execution. 2025-05-14 02:05:00.731929 | orchestrator | 2025-05-14 02:05:00 | INFO  | It takes a moment until task b93751b3-37ad-454a-bfcf-a3e7d6e2e85a (squid) has been started and output is visible here. 2025-05-14 02:05:03.761492 | orchestrator | 2025-05-14 02:05:03.762473 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-14 02:05:03.762906 | orchestrator | 2025-05-14 02:05:03.763427 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-14 02:05:03.763866 | orchestrator | Wednesday 14 May 2025 02:05:03 +0000 (0:00:00.101) 0:00:00.101 ********* 2025-05-14 02:05:03.853471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 02:05:03.853569 | orchestrator | 2025-05-14 02:05:03.854936 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-14 02:05:03.854961 | orchestrator | Wednesday 14 May 2025 02:05:03 +0000 (0:00:00.094) 0:00:00.196 ********* 2025-05-14 02:05:05.056759 | orchestrator | ok: [testbed-manager] 2025-05-14 02:05:05.057151 | orchestrator | 2025-05-14 02:05:05.057846 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-14 02:05:05.058823 | orchestrator | Wednesday 14 May 2025 02:05:05 +0000 (0:00:01.200) 0:00:01.397 ********* 2025-05-14 02:05:06.126462 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-14 02:05:06.126545 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-14 02:05:06.126601 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-14 02:05:06.126847 | orchestrator | 2025-05-14 02:05:06.127002 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-14 02:05:06.127328 | orchestrator | Wednesday 14 May 2025 02:05:06 +0000 (0:00:01.070) 0:00:02.467 ********* 2025-05-14 02:05:07.153640 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-14 02:05:07.153742 | orchestrator | 2025-05-14 02:05:07.154115 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-14 02:05:07.154177 | orchestrator | Wednesday 14 May 2025 02:05:07 +0000 (0:00:01.028) 0:00:03.496 ********* 2025-05-14 02:05:07.507076 | orchestrator | ok: [testbed-manager] 2025-05-14 02:05:07.507857 | orchestrator | 2025-05-14 02:05:07.508782 | 
orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-14 02:05:07.509198 | orchestrator | Wednesday 14 May 2025 02:05:07 +0000 (0:00:00.352) 0:00:03.848 ********* 2025-05-14 02:05:08.485551 | orchestrator | changed: [testbed-manager] 2025-05-14 02:05:08.485901 | orchestrator | 2025-05-14 02:05:08.485932 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-14 02:05:08.486451 | orchestrator | Wednesday 14 May 2025 02:05:08 +0000 (0:00:00.973) 0:00:04.822 ********* 2025-05-14 02:05:40.272666 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-05-14 02:05:40.272785 | orchestrator | ok: [testbed-manager] 2025-05-14 02:05:40.272802 | orchestrator | 2025-05-14 02:05:40.273228 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-14 02:05:40.274549 | orchestrator | Wednesday 14 May 2025 02:05:40 +0000 (0:00:31.786) 0:00:36.609 ********* 2025-05-14 02:05:52.758115 | orchestrator | changed: [testbed-manager] 2025-05-14 02:05:52.758253 | orchestrator | 2025-05-14 02:05:52.760657 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-14 02:05:52.760760 | orchestrator | Wednesday 14 May 2025 02:05:52 +0000 (0:00:12.485) 0:00:49.095 ********* 2025-05-14 02:06:52.861691 | orchestrator | Pausing for 60 seconds 2025-05-14 02:06:52.861813 | orchestrator | changed: [testbed-manager] 2025-05-14 02:06:52.861832 | orchestrator | 2025-05-14 02:06:52.861918 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-14 02:06:52.862273 | orchestrator | Wednesday 14 May 2025 02:06:52 +0000 (0:01:00.103) 0:01:49.198 ********* 2025-05-14 02:06:52.925314 | orchestrator | ok: [testbed-manager] 2025-05-14 02:06:52.925490 | orchestrator | 2025-05-14 02:06:52.926421 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-14 02:06:52.926813 | orchestrator | Wednesday 14 May 2025 02:06:52 +0000 (0:00:00.066) 0:01:49.264 ********* 2025-05-14 02:06:53.525428 | orchestrator | changed: [testbed-manager] 2025-05-14 02:06:53.525857 | orchestrator | 2025-05-14 02:06:53.527642 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:06:53.527670 | orchestrator | 2025-05-14 02:06:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:06:53.527684 | orchestrator | 2025-05-14 02:06:53 | INFO  | Please wait and do not abort execution. 
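Editor's note: the squid handlers above restart the compose-managed service, pause for 60 seconds, and then poll until the container reports healthy. A rough manual equivalent on the manager, assuming the compose project sits in /opt/squid (the directory created by the role) and that the service is simply named squid (an assumption, not shown in the log):

    # Hypothetical manual check mirroring the squid handlers; the /opt/squid path
    # comes from the log, the service/container name "squid" is assumed.
    cd /opt/squid
    docker compose restart squid   # "Restart squid service"
    sleep 60                       # "Wait for squid service to start" pauses for 60 seconds
    # "Wait for an healthy squid service": poll the compose status until it shows (healthy)
    until docker compose ps squid | grep -q healthy; do
        sleep 5
    done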
2025-05-14 02:06:53.529724 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:06:53.530795 | orchestrator | 2025-05-14 02:06:53.531542 | orchestrator | Wednesday 14 May 2025 02:06:53 +0000 (0:00:00.601) 0:01:49.866 ********* 2025-05-14 02:06:53.532705 | orchestrator | =============================================================================== 2025-05-14 02:06:53.533707 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2025-05-14 02:06:53.534848 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.79s 2025-05-14 02:06:53.535824 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.49s 2025-05-14 02:06:53.536693 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.20s 2025-05-14 02:06:53.537679 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.07s 2025-05-14 02:06:53.538131 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.03s 2025-05-14 02:06:53.538668 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s 2025-05-14 02:06:53.539805 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-05-14 02:06:53.540058 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-05-14 02:06:53.540694 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-05-14 02:06:53.541156 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-05-14 02:06:53.985599 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-14 02:06:53.985694 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-05-14 02:06:53.992175 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-14 02:06:54.053536 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-14 02:06:54.053630 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-14 02:06:54.053646 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-05-14 02:06:54.056984 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-14 02:06:54.059988 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-14 02:06:54.063868 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-14 02:06:55.536078 | orchestrator | 2025-05-14 02:06:55 | INFO  | Task f0a11a64-cb12-4df7-b613-962d6bf33d32 (operator) was prepared for execution. 2025-05-14 02:06:55.536201 | orchestrator | 2025-05-14 02:06:55 | INFO  | It takes a moment until task f0a11a64-cb12-4df7-b613-962d6bf33d32 (operator) has been started and output is visible here. 
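Editor's note: right after the squid play the deploy script patches the testbed inventory for the pinned release. Because 8.1.0 is not "latest" and compares below 9.0.0 (the semver helper prints -1 in the trace), the Kolla image namespace is switched to kolla/release and the commented network_dispatcher_scripts block for the VXLAN hook is re-enabled. A condensed sketch of that gating, reconstructed from the trace above (the surrounding control flow of the script is an assumption; the sed expressions and paths are taken verbatim from the log):

    MANAGER_VERSION="8.1.0"   # pinned release used by this job (from the trace)

    # Pinned releases pull Kolla images from the release namespace.
    if [[ "${MANAGER_VERSION}" != "latest" ]]; then
        sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' \
            /opt/configuration/inventory/group_vars/all/kolla.yml
    fi

    # semver is the job's own compare helper seen in the trace (prints -1/0/1).
    # Releases below 9.0.0 still need the networkd dispatcher VXLAN hook.
    if [[ "$(semver "${MANAGER_VERSION}" 9.0.0)" -lt 0 && "${MANAGER_VERSION}" != "latest" ]]; then
        sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' \
            /opt/configuration/inventory/group_vars/testbed-nodes.yml
        sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' \
            /opt/configuration/inventory/group_vars/testbed-nodes.yml \
            /opt/configuration/inventory/group_vars/testbed-managers.yml
        sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' \
            /opt/configuration/inventory/group_vars/testbed-nodes.yml \
            /opt/configuration/inventory/group_vars/testbed-managers.yml
    fi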
2025-05-14 02:06:58.531603 | orchestrator | 2025-05-14 02:06:58.531717 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-14 02:06:58.532105 | orchestrator | 2025-05-14 02:06:58.533547 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 02:06:58.535024 | orchestrator | Wednesday 14 May 2025 02:06:58 +0000 (0:00:00.088) 0:00:00.088 ********* 2025-05-14 02:07:01.842225 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:07:01.842457 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:01.843540 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:07:01.847122 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:01.848184 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:07:01.848542 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:01.849189 | orchestrator | 2025-05-14 02:07:01.849937 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-14 02:07:01.850262 | orchestrator | Wednesday 14 May 2025 02:07:01 +0000 (0:00:03.315) 0:00:03.403 ********* 2025-05-14 02:07:02.626766 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:02.626957 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:07:02.627791 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:07:02.629699 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:02.631289 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:02.632392 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:07:02.635611 | orchestrator | 2025-05-14 02:07:02.635932 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-14 02:07:02.636715 | orchestrator | 2025-05-14 02:07:02.637084 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-14 02:07:02.639687 | orchestrator | Wednesday 14 May 2025 02:07:02 +0000 (0:00:00.780) 0:00:04.184 ********* 2025-05-14 02:07:02.700175 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:07:02.729568 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:07:02.749292 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:07:02.803363 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:02.804239 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:02.805137 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:02.809395 | orchestrator | 2025-05-14 02:07:02.810122 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-14 02:07:02.810757 | orchestrator | Wednesday 14 May 2025 02:07:02 +0000 (0:00:00.181) 0:00:04.366 ********* 2025-05-14 02:07:02.868670 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:07:02.884168 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:07:02.905654 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:07:02.940007 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:02.940130 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:02.940379 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:02.940732 | orchestrator | 2025-05-14 02:07:02.941082 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-14 02:07:02.942272 | orchestrator | Wednesday 14 May 2025 02:07:02 +0000 (0:00:00.136) 0:00:04.503 ********* 2025-05-14 02:07:03.562926 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:03.564330 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:07:03.564399 | orchestrator | changed: [testbed-node-0] 2025-05-14 
02:07:03.566326 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:03.567315 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:07:03.569073 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:03.569495 | orchestrator | 2025-05-14 02:07:03.570325 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-14 02:07:03.571416 | orchestrator | Wednesday 14 May 2025 02:07:03 +0000 (0:00:00.621) 0:00:05.125 ********* 2025-05-14 02:07:04.322527 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:07:04.323090 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:07:04.324488 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:04.325216 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:07:04.327947 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:04.328138 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:04.328763 | orchestrator | 2025-05-14 02:07:04.331826 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-14 02:07:04.331910 | orchestrator | Wednesday 14 May 2025 02:07:04 +0000 (0:00:00.756) 0:00:05.881 ********* 2025-05-14 02:07:05.467308 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-14 02:07:05.467448 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-14 02:07:05.467463 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-14 02:07:05.468422 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-14 02:07:05.468631 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-14 02:07:05.469015 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-14 02:07:05.470012 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-14 02:07:05.473124 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-14 02:07:05.473152 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-14 02:07:05.473966 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-14 02:07:05.474803 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-14 02:07:05.475805 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-14 02:07:05.476155 | orchestrator | 2025-05-14 02:07:05.476595 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-14 02:07:05.477022 | orchestrator | Wednesday 14 May 2025 02:07:05 +0000 (0:00:01.145) 0:00:07.026 ********* 2025-05-14 02:07:06.655387 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:06.655576 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:06.658792 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:07:06.658879 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:07:06.658943 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:06.659205 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:07:06.659508 | orchestrator | 2025-05-14 02:07:06.659893 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-14 02:07:06.660129 | orchestrator | Wednesday 14 May 2025 02:07:06 +0000 (0:00:01.188) 0:00:08.215 ********* 2025-05-14 02:07:07.837554 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-14 02:07:07.839046 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-14 02:07:07.839909 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-14 02:07:07.868009 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:07:07.869346 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:07:07.872070 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:07:07.872093 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:07:07.872105 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:07:07.872117 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:07:07.872545 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-14 02:07:07.873212 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-14 02:07:07.873643 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-14 02:07:07.874288 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-14 02:07:07.874688 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-14 02:07:07.875192 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-14 02:07:07.875980 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:07:07.876534 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:07:07.876909 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:07:07.877291 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:07:07.877859 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:07:07.878207 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:07:07.878716 | orchestrator | 2025-05-14 02:07:07.879603 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-14 02:07:07.880003 | orchestrator | Wednesday 14 May 2025 02:07:07 +0000 (0:00:01.215) 0:00:09.431 ********* 2025-05-14 02:07:08.409205 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:07:08.410264 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:08.411806 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:07:08.412372 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:07:08.412558 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:08.413059 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:08.413454 | orchestrator | 2025-05-14 02:07:08.416279 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-14 02:07:08.417548 | orchestrator | Wednesday 14 May 2025 02:07:08 +0000 (0:00:00.538) 0:00:09.969 ********* 2025-05-14 02:07:08.484717 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:07:08.513782 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:07:08.552733 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:07:08.601903 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:07:08.602428 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:07:08.603283 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:07:08.605586 | orchestrator | 2025-05-14 02:07:08.605611 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-05-14 02:07:08.605625 | orchestrator | Wednesday 14 May 2025 02:07:08 +0000 (0:00:00.194) 0:00:10.164 ********* 2025-05-14 02:07:09.303862 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 02:07:09.303944 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:09.304168 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:07:09.304855 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 02:07:09.305568 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:07:09.305776 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:09.306420 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-14 02:07:09.307258 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:07:09.307275 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-14 02:07:09.307572 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:07:09.308181 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:07:09.308462 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:09.309047 | orchestrator | 2025-05-14 02:07:09.309403 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-14 02:07:09.312171 | orchestrator | Wednesday 14 May 2025 02:07:09 +0000 (0:00:00.700) 0:00:10.864 ********* 2025-05-14 02:07:09.343390 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:07:09.359530 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:07:09.378184 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:07:09.428976 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:07:09.429038 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:07:09.429197 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:07:09.429538 | orchestrator | 2025-05-14 02:07:09.429940 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-14 02:07:09.430092 | orchestrator | Wednesday 14 May 2025 02:07:09 +0000 (0:00:00.126) 0:00:10.991 ********* 2025-05-14 02:07:09.467208 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:07:09.504136 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:07:09.521082 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:07:09.555699 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:07:09.556238 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:07:09.557671 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:07:09.557940 | orchestrator | 2025-05-14 02:07:09.559457 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-14 02:07:09.560607 | orchestrator | Wednesday 14 May 2025 02:07:09 +0000 (0:00:00.125) 0:00:11.116 ********* 2025-05-14 02:07:09.614617 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:07:09.634930 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:07:09.653990 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:07:09.678931 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:07:09.680159 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:07:09.681518 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:07:09.683128 | orchestrator | 2025-05-14 02:07:09.684026 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-14 02:07:09.685851 | orchestrator | Wednesday 14 May 2025 02:07:09 +0000 (0:00:00.124) 0:00:11.241 ********* 2025-05-14 02:07:10.346278 | orchestrator | changed: [testbed-node-1] 2025-05-14 
02:07:10.347842 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:10.349633 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:07:10.351022 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:07:10.352239 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:10.352956 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:10.354098 | orchestrator | 2025-05-14 02:07:10.354920 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-14 02:07:10.355906 | orchestrator | Wednesday 14 May 2025 02:07:10 +0000 (0:00:00.664) 0:00:11.906 ********* 2025-05-14 02:07:10.462399 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:07:10.501029 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:07:10.640333 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:07:10.641195 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:07:10.642085 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:07:10.643020 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:07:10.644001 | orchestrator | 2025-05-14 02:07:10.644291 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:07:10.645191 | orchestrator | 2025-05-14 02:07:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:07:10.645793 | orchestrator | 2025-05-14 02:07:10 | INFO  | Please wait and do not abort execution. 2025-05-14 02:07:10.646737 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:07:10.647535 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:07:10.648029 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:07:10.648272 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:07:10.648698 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:07:10.649213 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:07:10.649459 | orchestrator | 2025-05-14 02:07:10.649890 | orchestrator | Wednesday 14 May 2025 02:07:10 +0000 (0:00:00.296) 0:00:12.202 ********* 2025-05-14 02:07:10.650276 | orchestrator | =============================================================================== 2025-05-14 02:07:10.651026 | orchestrator | Gathering Facts --------------------------------------------------------- 3.32s 2025-05-14 02:07:10.651986 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s 2025-05-14 02:07:10.652955 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.19s 2025-05-14 02:07:10.653594 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s 2025-05-14 02:07:10.654392 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2025-05-14 02:07:10.655285 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.76s 2025-05-14 02:07:10.655600 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s 2025-05-14 02:07:10.656055 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.67s 2025-05-14 02:07:10.656161 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s 2025-05-14 02:07:10.657150 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s 2025-05-14 02:07:10.657174 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.30s 2025-05-14 02:07:10.657417 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s 2025-05-14 02:07:10.658113 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-05-14 02:07:10.658781 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s 2025-05-14 02:07:10.659555 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s 2025-05-14 02:07:10.660007 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s 2025-05-14 02:07:10.660628 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.12s 2025-05-14 02:07:11.089181 | orchestrator | + osism apply --environment custom facts 2025-05-14 02:07:12.484880 | orchestrator | 2025-05-14 02:07:12 | INFO  | Trying to run play facts in environment custom 2025-05-14 02:07:12.535206 | orchestrator | 2025-05-14 02:07:12 | INFO  | Task 3460b3a0-4b9d-4adf-9a8e-69b729833b47 (facts) was prepared for execution. 2025-05-14 02:07:12.535298 | orchestrator | 2025-05-14 02:07:12 | INFO  | It takes a moment until task 3460b3a0-4b9d-4adf-9a8e-69b729833b47 (facts) has been started and output is visible here. 2025-05-14 02:07:15.622316 | orchestrator | 2025-05-14 02:07:15.622621 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-14 02:07:15.624885 | orchestrator | 2025-05-14 02:07:15.625264 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-14 02:07:15.626313 | orchestrator | Wednesday 14 May 2025 02:07:15 +0000 (0:00:00.083) 0:00:00.083 ********* 2025-05-14 02:07:16.880644 | orchestrator | ok: [testbed-manager] 2025-05-14 02:07:17.971867 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:07:17.972675 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:17.974298 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:17.975867 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:17.977701 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:07:17.979223 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:07:17.980435 | orchestrator | 2025-05-14 02:07:17.981408 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-14 02:07:17.982373 | orchestrator | Wednesday 14 May 2025 02:07:17 +0000 (0:00:02.353) 0:00:02.437 ********* 2025-05-14 02:07:19.094462 | orchestrator | ok: [testbed-manager] 2025-05-14 02:07:19.971714 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:19.974801 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:19.974837 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:19.975300 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:07:19.976001 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:07:19.976611 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:07:19.977207 | orchestrator | 2025-05-14 02:07:19.977956 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-05-14 02:07:19.978397 | orchestrator | 2025-05-14 02:07:19.980177 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-14 02:07:19.980294 | orchestrator | Wednesday 14 May 2025 02:07:19 +0000 (0:00:01.998) 0:00:04.435 ********* 2025-05-14 02:07:20.118153 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:20.118312 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:20.118990 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:20.119321 | orchestrator | 2025-05-14 02:07:20.120076 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-14 02:07:20.122995 | orchestrator | Wednesday 14 May 2025 02:07:20 +0000 (0:00:00.147) 0:00:04.583 ********* 2025-05-14 02:07:20.246628 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:20.247009 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:20.247747 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:20.248705 | orchestrator | 2025-05-14 02:07:20.250107 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-14 02:07:20.250521 | orchestrator | Wednesday 14 May 2025 02:07:20 +0000 (0:00:00.129) 0:00:04.712 ********* 2025-05-14 02:07:20.376557 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:20.376652 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:20.376665 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:20.376676 | orchestrator | 2025-05-14 02:07:20.376689 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-14 02:07:20.376701 | orchestrator | Wednesday 14 May 2025 02:07:20 +0000 (0:00:00.126) 0:00:04.839 ********* 2025-05-14 02:07:20.511430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:07:20.512004 | orchestrator | 2025-05-14 02:07:20.513252 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-14 02:07:20.514429 | orchestrator | Wednesday 14 May 2025 02:07:20 +0000 (0:00:00.137) 0:00:04.976 ********* 2025-05-14 02:07:20.953634 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:20.954366 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:20.955607 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:20.956982 | orchestrator | 2025-05-14 02:07:20.957474 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-14 02:07:20.959473 | orchestrator | Wednesday 14 May 2025 02:07:20 +0000 (0:00:00.438) 0:00:05.414 ********* 2025-05-14 02:07:21.069402 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:07:21.071215 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:07:21.072362 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:07:21.073914 | orchestrator | 2025-05-14 02:07:21.076553 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-14 02:07:21.077967 | orchestrator | Wednesday 14 May 2025 02:07:21 +0000 (0:00:00.118) 0:00:05.533 ********* 2025-05-14 02:07:22.033171 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:22.033826 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:22.035956 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:22.037352 | orchestrator | 2025-05-14 02:07:22.040696 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-05-14 02:07:22.040727 | orchestrator | Wednesday 14 May 2025 02:07:22 +0000 (0:00:00.963) 0:00:06.496 ********* 2025-05-14 02:07:22.507648 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:22.508409 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:22.508535 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:22.509055 | orchestrator | 2025-05-14 02:07:22.511434 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-14 02:07:22.512007 | orchestrator | Wednesday 14 May 2025 02:07:22 +0000 (0:00:00.475) 0:00:06.971 ********* 2025-05-14 02:07:23.515819 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:23.515924 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:23.515939 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:23.516015 | orchestrator | 2025-05-14 02:07:23.516307 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-14 02:07:23.518733 | orchestrator | Wednesday 14 May 2025 02:07:23 +0000 (0:00:01.007) 0:00:07.979 ********* 2025-05-14 02:07:35.917901 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:35.918006 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:35.918071 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:35.918085 | orchestrator | 2025-05-14 02:07:35.918098 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-14 02:07:35.918110 | orchestrator | Wednesday 14 May 2025 02:07:35 +0000 (0:00:12.394) 0:00:20.374 ********* 2025-05-14 02:07:35.973734 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:07:36.021732 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:07:36.022173 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:07:36.022701 | orchestrator | 2025-05-14 02:07:36.023961 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-14 02:07:36.027238 | orchestrator | Wednesday 14 May 2025 02:07:36 +0000 (0:00:00.112) 0:00:20.486 ********* 2025-05-14 02:07:42.564744 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:07:42.564852 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:07:42.565947 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:07:42.567372 | orchestrator | 2025-05-14 02:07:42.568699 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-14 02:07:42.569448 | orchestrator | Wednesday 14 May 2025 02:07:42 +0000 (0:00:06.541) 0:00:27.028 ********* 2025-05-14 02:07:42.968110 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:42.969029 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:42.969182 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:42.969819 | orchestrator | 2025-05-14 02:07:42.970559 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-14 02:07:42.970701 | orchestrator | Wednesday 14 May 2025 02:07:42 +0000 (0:00:00.405) 0:00:27.433 ********* 2025-05-14 02:07:46.360839 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-14 02:07:46.362390 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-14 02:07:46.364006 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-14 02:07:46.364933 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 
2025-05-14 02:07:46.365626 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-14 02:07:46.367064 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-14 02:07:46.369374 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-14 02:07:46.369464 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-14 02:07:46.370765 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-14 02:07:46.372041 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-14 02:07:46.372709 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-14 02:07:46.373166 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-14 02:07:46.373964 | orchestrator | 2025-05-14 02:07:46.375956 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-14 02:07:46.376471 | orchestrator | Wednesday 14 May 2025 02:07:46 +0000 (0:00:03.390) 0:00:30.823 ********* 2025-05-14 02:07:47.407071 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:47.407771 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:47.409796 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:47.410616 | orchestrator | 2025-05-14 02:07:47.411285 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:07:47.411863 | orchestrator | 2025-05-14 02:07:47.412454 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:07:47.413123 | orchestrator | Wednesday 14 May 2025 02:07:47 +0000 (0:00:01.046) 0:00:31.869 ********* 2025-05-14 02:07:49.125124 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:07:52.412801 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:07:52.412884 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:07:52.412949 | orchestrator | ok: [testbed-manager] 2025-05-14 02:07:52.414005 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:52.415085 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:52.415552 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:52.416294 | orchestrator | 2025-05-14 02:07:52.417130 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:07:52.417748 | orchestrator | 2025-05-14 02:07:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:07:52.417770 | orchestrator | 2025-05-14 02:07:52 | INFO  | Please wait and do not abort execution. 
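Editor's note: the custom facts play above drops small fact files (testbed_ceph_devices, testbed_ceph_osd_devices and their *_all variants) onto the Ceph nodes and then re-gathers facts, so later plays can read them as local facts. A hedged way to inspect the result by hand, assuming the files land in Ansible's conventional /etc/ansible/facts.d directory (the exact path and file contents are not shown in the log):

    # On one of the nodes: the fact files copied by the play.
    ls /etc/ansible/facts.d/
    # e.g. testbed_ceph_devices.fact  testbed_ceph_osd_devices.fact ...

    # From a host with the inventory: re-gather and show only the node's local facts.
    ansible testbed-node-3 -m ansible.builtin.setup -a 'filter=ansible_local'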
2025-05-14 02:07:52.418739 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:07:52.419078 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:07:52.419101 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:07:52.419984 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:07:52.420078 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:07:52.420331 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:07:52.420890 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:07:52.421239 | orchestrator | 2025-05-14 02:07:52.422122 | orchestrator | Wednesday 14 May 2025 02:07:52 +0000 (0:00:05.005) 0:00:36.875 ********* 2025-05-14 02:07:52.422250 | orchestrator | =============================================================================== 2025-05-14 02:07:52.422828 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.39s 2025-05-14 02:07:52.423800 | orchestrator | Install required packages (Debian) -------------------------------------- 6.54s 2025-05-14 02:07:52.424494 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.01s 2025-05-14 02:07:52.424839 | orchestrator | Copy fact files --------------------------------------------------------- 3.39s 2025-05-14 02:07:52.425265 | orchestrator | Create custom facts directory ------------------------------------------- 2.35s 2025-05-14 02:07:52.425661 | orchestrator | Copy fact file ---------------------------------------------------------- 2.00s 2025-05-14 02:07:52.426338 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.05s 2025-05-14 02:07:52.426747 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.01s 2025-05-14 02:07:52.427250 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.96s 2025-05-14 02:07:52.427726 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s 2025-05-14 02:07:52.428125 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2025-05-14 02:07:52.428618 | orchestrator | Create custom facts directory ------------------------------------------- 0.41s 2025-05-14 02:07:52.428989 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.15s 2025-05-14 02:07:52.429455 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-05-14 02:07:52.429996 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.13s 2025-05-14 02:07:52.430388 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.13s 2025-05-14 02:07:52.430859 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-05-14 02:07:52.431205 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-05-14 02:07:52.857534 | orchestrator | + osism apply bootstrap 2025-05-14 02:07:54.295764 | 
orchestrator | 2025-05-14 02:07:54 | INFO  | Task d0a7f1ac-6a8b-4e19-a348-ddb13cc2b49d (bootstrap) was prepared for execution. 2025-05-14 02:07:54.295858 | orchestrator | 2025-05-14 02:07:54 | INFO  | It takes a moment until task d0a7f1ac-6a8b-4e19-a348-ddb13cc2b49d (bootstrap) has been started and output is visible here. 2025-05-14 02:07:57.473685 | orchestrator | 2025-05-14 02:07:57.474578 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-14 02:07:57.475074 | orchestrator | 2025-05-14 02:07:57.476763 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-14 02:07:57.477374 | orchestrator | Wednesday 14 May 2025 02:07:57 +0000 (0:00:00.105) 0:00:00.105 ********* 2025-05-14 02:07:57.585416 | orchestrator | ok: [testbed-manager] 2025-05-14 02:07:57.619891 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:07:57.646491 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:07:57.742365 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:07:57.742504 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:07:57.746695 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:07:57.747273 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:07:57.748285 | orchestrator | 2025-05-14 02:07:57.749048 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:07:57.749679 | orchestrator | 2025-05-14 02:07:57.750538 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:07:57.751402 | orchestrator | Wednesday 14 May 2025 02:07:57 +0000 (0:00:00.271) 0:00:00.377 ********* 2025-05-14 02:08:01.798485 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:01.799275 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:01.802197 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:01.802226 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:01.802237 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:01.802249 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:01.803214 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:01.803315 | orchestrator | 2025-05-14 02:08:01.804049 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-14 02:08:01.804535 | orchestrator | 2025-05-14 02:08:01.805057 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:08:01.805335 | orchestrator | Wednesday 14 May 2025 02:08:01 +0000 (0:00:04.055) 0:00:04.433 ********* 2025-05-14 02:08:01.891136 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-14 02:08:01.940855 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-14 02:08:01.940970 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-14 02:08:01.941022 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:08:01.941278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-14 02:08:02.259377 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:08:02.260022 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-14 02:08:02.263252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:08:02.264054 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:08:02.265326 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-manager)  2025-05-14 02:08:02.266110 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-14 02:08:02.266744 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:08:02.267575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-14 02:08:02.268106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:08:02.269010 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:08:02.269649 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:08:02.269916 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-14 02:08:02.270907 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-14 02:08:02.271446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:08:02.272453 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:08:02.275872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:08:02.276747 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:02.277100 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:08:02.278467 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-14 02:08:02.279222 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-14 02:08:02.280049 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:08:02.280945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:08:02.281701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:08:02.282091 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:08:02.282545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:08:02.283438 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-14 02:08:02.283665 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:08:02.284342 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:08:02.285412 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:08:02.285721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:08:02.286128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:08:02.286590 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:08:02.289921 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:08:02.290590 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:08:02.290676 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:08:02.291123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:08:02.291536 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:02.291637 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 02:08:02.292324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:08:02.292508 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:08:02.292753 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:08:02.293048 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 
02:08:02.293434 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:08:02.293769 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:02.293894 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:08:02.294088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:08:02.294495 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:02.294913 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:02.295149 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:08:02.295595 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:08:02.295666 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:02.297171 | orchestrator | 2025-05-14 02:08:02.297196 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-14 02:08:02.297210 | orchestrator | 2025-05-14 02:08:02.297814 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-05-14 02:08:02.297989 | orchestrator | Wednesday 14 May 2025 02:08:02 +0000 (0:00:00.460) 0:00:04.893 ********* 2025-05-14 02:08:02.341112 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:02.368322 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:02.391146 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:02.419886 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:02.480599 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:02.480783 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:02.481299 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:02.481722 | orchestrator | 2025-05-14 02:08:02.482494 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-14 02:08:02.482611 | orchestrator | Wednesday 14 May 2025 02:08:02 +0000 (0:00:00.222) 0:00:05.116 ********* 2025-05-14 02:08:03.755821 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:03.756498 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:03.757787 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:03.760152 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:03.760191 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:03.760243 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:03.761436 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:03.762883 | orchestrator | 2025-05-14 02:08:03.763625 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-14 02:08:03.764885 | orchestrator | Wednesday 14 May 2025 02:08:03 +0000 (0:00:01.274) 0:00:06.390 ********* 2025-05-14 02:08:05.065783 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:05.066329 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:05.067269 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:05.067335 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:05.068351 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:05.068723 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:05.069466 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:05.070071 | orchestrator | 2025-05-14 02:08:05.070544 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-14 02:08:05.071129 | orchestrator | Wednesday 14 May 2025 02:08:05 +0000 (0:00:01.309) 0:00:07.700 ********* 2025-05-14 02:08:05.310191 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:08:05.311103 | orchestrator | 2025-05-14 02:08:05.312412 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-14 02:08:05.314957 | orchestrator | Wednesday 14 May 2025 02:08:05 +0000 (0:00:00.245) 0:00:07.945 ********* 2025-05-14 02:08:07.238838 | orchestrator | changed: [testbed-manager] 2025-05-14 02:08:07.238918 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:07.238932 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:07.239433 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:07.239563 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:07.239643 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:07.240292 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:07.240609 | orchestrator | 2025-05-14 02:08:07.241479 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-14 02:08:07.241618 | orchestrator | Wednesday 14 May 2025 02:08:07 +0000 (0:00:01.924) 0:00:09.869 ********* 2025-05-14 02:08:07.310083 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:08:07.463242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:08:07.463478 | orchestrator | 2025-05-14 02:08:07.466302 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-14 02:08:07.467239 | orchestrator | Wednesday 14 May 2025 02:08:07 +0000 (0:00:00.227) 0:00:10.097 ********* 2025-05-14 02:08:08.397642 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:08.397851 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:08.398684 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:08.400057 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:08.400087 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:08.400099 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:08.400161 | orchestrator | 2025-05-14 02:08:08.400800 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-14 02:08:08.402300 | orchestrator | Wednesday 14 May 2025 02:08:08 +0000 (0:00:00.932) 0:00:11.030 ********* 2025-05-14 02:08:08.490280 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:08:09.013168 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:09.013267 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:09.013281 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:09.013293 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:09.013305 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:09.013377 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:09.014293 | orchestrator | 2025-05-14 02:08:09.014324 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-14 02:08:09.014341 | orchestrator | Wednesday 14 May 2025 02:08:09 +0000 (0:00:00.617) 0:00:11.647 ********* 2025-05-14 02:08:09.118838 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:09.148337 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:09.177688 | 
orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:09.446980 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:09.447185 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:09.448349 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:09.449367 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:09.451461 | orchestrator | 2025-05-14 02:08:09.452575 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-14 02:08:09.453804 | orchestrator | Wednesday 14 May 2025 02:08:09 +0000 (0:00:00.433) 0:00:12.081 ********* 2025-05-14 02:08:09.520202 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:08:09.543179 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:09.564517 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:09.593346 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:09.646185 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:09.647267 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:09.648781 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:09.649350 | orchestrator | 2025-05-14 02:08:09.650593 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-14 02:08:09.651327 | orchestrator | Wednesday 14 May 2025 02:08:09 +0000 (0:00:00.200) 0:00:12.281 ********* 2025-05-14 02:08:09.933067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:08:09.935093 | orchestrator | 2025-05-14 02:08:09.936744 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-14 02:08:09.937718 | orchestrator | Wednesday 14 May 2025 02:08:09 +0000 (0:00:00.285) 0:00:12.566 ********* 2025-05-14 02:08:10.227444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:08:10.228459 | orchestrator | 2025-05-14 02:08:10.229558 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-14 02:08:10.230965 | orchestrator | Wednesday 14 May 2025 02:08:10 +0000 (0:00:00.294) 0:00:12.861 ********* 2025-05-14 02:08:11.454502 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:11.456192 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:11.456246 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:11.457556 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:11.458393 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:11.458938 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:11.459733 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:11.459885 | orchestrator | 2025-05-14 02:08:11.460286 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-14 02:08:11.461314 | orchestrator | Wednesday 14 May 2025 02:08:11 +0000 (0:00:01.226) 0:00:14.087 ********* 2025-05-14 02:08:11.525366 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:08:11.564648 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:11.588411 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:11.624959 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:08:11.685425 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:11.685660 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:11.686278 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:11.686646 | orchestrator | 2025-05-14 02:08:11.687071 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-14 02:08:11.687728 | orchestrator | Wednesday 14 May 2025 02:08:11 +0000 (0:00:00.233) 0:00:14.321 ********* 2025-05-14 02:08:12.204184 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:12.204272 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:12.207123 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:12.207150 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:12.207162 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:12.208019 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:12.208937 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:12.209814 | orchestrator | 2025-05-14 02:08:12.210684 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-14 02:08:12.210992 | orchestrator | Wednesday 14 May 2025 02:08:12 +0000 (0:00:00.516) 0:00:14.837 ********* 2025-05-14 02:08:12.286381 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:08:12.312308 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:12.339234 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:12.362609 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:12.454505 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:12.454709 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:12.457320 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:12.457346 | orchestrator | 2025-05-14 02:08:12.457768 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-14 02:08:12.458150 | orchestrator | Wednesday 14 May 2025 02:08:12 +0000 (0:00:00.250) 0:00:15.088 ********* 2025-05-14 02:08:12.985417 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:12.985513 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:12.985680 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:12.985701 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:12.986082 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:12.986706 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:12.986895 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:12.987314 | orchestrator | 2025-05-14 02:08:12.987723 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-14 02:08:12.988198 | orchestrator | Wednesday 14 May 2025 02:08:12 +0000 (0:00:00.525) 0:00:15.614 ********* 2025-05-14 02:08:14.139466 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:14.141824 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:14.141856 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:14.141868 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:14.143262 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:14.144753 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:14.145598 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:14.146205 | orchestrator | 2025-05-14 02:08:14.147458 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-14 02:08:14.147931 | orchestrator | Wednesday 14 May 2025 
02:08:14 +0000 (0:00:01.157) 0:00:16.771 ********* 2025-05-14 02:08:15.323708 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:15.324232 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:15.327745 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:15.327774 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:15.327786 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:15.327797 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:15.327809 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:15.328613 | orchestrator | 2025-05-14 02:08:15.329560 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-14 02:08:15.329886 | orchestrator | Wednesday 14 May 2025 02:08:15 +0000 (0:00:01.180) 0:00:17.952 ********* 2025-05-14 02:08:15.642114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:08:15.643398 | orchestrator | 2025-05-14 02:08:15.643446 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-14 02:08:15.644421 | orchestrator | Wednesday 14 May 2025 02:08:15 +0000 (0:00:00.322) 0:00:18.275 ********* 2025-05-14 02:08:15.718326 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:08:17.246284 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:17.249855 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:17.249894 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:17.249903 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:17.249910 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:17.249917 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:17.250054 | orchestrator | 2025-05-14 02:08:17.250519 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-14 02:08:17.251460 | orchestrator | Wednesday 14 May 2025 02:08:17 +0000 (0:00:01.601) 0:00:19.876 ********* 2025-05-14 02:08:17.317015 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:17.352958 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:17.376336 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:17.404455 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:17.469463 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:17.470594 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:17.471677 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:17.472459 | orchestrator | 2025-05-14 02:08:17.473713 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-14 02:08:17.473753 | orchestrator | Wednesday 14 May 2025 02:08:17 +0000 (0:00:00.227) 0:00:20.104 ********* 2025-05-14 02:08:17.581482 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:17.606689 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:17.632139 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:17.695059 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:17.698164 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:17.699471 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:17.699761 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:17.701406 | orchestrator | 2025-05-14 02:08:17.702582 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-14 02:08:17.704100 | 
orchestrator | Wednesday 14 May 2025 02:08:17 +0000 (0:00:00.224) 0:00:20.328 ********* 2025-05-14 02:08:17.769684 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:17.802308 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:17.829103 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:17.858616 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:17.937515 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:17.938161 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:17.938647 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:17.939676 | orchestrator | 2025-05-14 02:08:17.940430 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-14 02:08:17.941413 | orchestrator | Wednesday 14 May 2025 02:08:17 +0000 (0:00:00.238) 0:00:20.567 ********* 2025-05-14 02:08:18.281391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:08:18.282260 | orchestrator | 2025-05-14 02:08:18.283671 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-14 02:08:18.285007 | orchestrator | Wednesday 14 May 2025 02:08:18 +0000 (0:00:00.347) 0:00:20.915 ********* 2025-05-14 02:08:18.821678 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:18.821847 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:18.822135 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:18.823172 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:18.824177 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:18.824620 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:18.825525 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:18.826276 | orchestrator | 2025-05-14 02:08:18.827335 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-14 02:08:18.828293 | orchestrator | Wednesday 14 May 2025 02:08:18 +0000 (0:00:00.537) 0:00:21.453 ********* 2025-05-14 02:08:18.897274 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:08:18.926256 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:18.949725 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:18.978459 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:19.069200 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:19.069285 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:19.070255 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:19.076679 | orchestrator | 2025-05-14 02:08:19.076774 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-14 02:08:19.076791 | orchestrator | Wednesday 14 May 2025 02:08:19 +0000 (0:00:00.248) 0:00:21.702 ********* 2025-05-14 02:08:20.113516 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:20.113665 | orchestrator | changed: [testbed-manager] 2025-05-14 02:08:20.114501 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:20.117685 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:20.117708 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:20.118003 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:20.119035 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:20.120075 | orchestrator | 2025-05-14 02:08:20.121630 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-05-14 02:08:20.122735 | orchestrator | Wednesday 14 May 2025 02:08:20 +0000 (0:00:01.042) 0:00:22.745 ********* 2025-05-14 02:08:20.689455 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:20.690242 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:20.690687 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:20.691744 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:20.692698 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:20.693142 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:20.693690 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:20.693958 | orchestrator | 2025-05-14 02:08:20.695701 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-14 02:08:20.696525 | orchestrator | Wednesday 14 May 2025 02:08:20 +0000 (0:00:00.575) 0:00:23.321 ********* 2025-05-14 02:08:21.804013 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:21.806081 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:21.806134 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:21.807323 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:21.808379 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:21.810395 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:21.810463 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:21.810593 | orchestrator | 2025-05-14 02:08:21.811960 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-14 02:08:21.812005 | orchestrator | Wednesday 14 May 2025 02:08:21 +0000 (0:00:01.114) 0:00:24.435 ********* 2025-05-14 02:08:35.250159 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:35.250349 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:35.250368 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:35.250381 | orchestrator | changed: [testbed-manager] 2025-05-14 02:08:35.250471 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:35.251367 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:35.252540 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:35.253425 | orchestrator | 2025-05-14 02:08:35.255248 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-14 02:08:35.255308 | orchestrator | Wednesday 14 May 2025 02:08:35 +0000 (0:00:13.442) 0:00:37.878 ********* 2025-05-14 02:08:35.315519 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:35.344735 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:35.377067 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:35.403147 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:35.455819 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:35.456646 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:35.457929 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:35.460504 | orchestrator | 2025-05-14 02:08:35.460534 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-14 02:08:35.460570 | orchestrator | Wednesday 14 May 2025 02:08:35 +0000 (0:00:00.213) 0:00:38.091 ********* 2025-05-14 02:08:35.535287 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:35.558795 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:35.587398 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:35.614451 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:35.692117 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:35.692997 | orchestrator | ok: [testbed-node-1] 2025-05-14 
02:08:35.693816 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:35.695034 | orchestrator | 2025-05-14 02:08:35.695330 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-14 02:08:35.696059 | orchestrator | Wednesday 14 May 2025 02:08:35 +0000 (0:00:00.234) 0:00:38.325 ********* 2025-05-14 02:08:35.775253 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:35.806288 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:35.830325 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:35.854125 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:35.933273 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:35.933529 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:35.934176 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:35.934675 | orchestrator | 2025-05-14 02:08:35.935144 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-14 02:08:35.936311 | orchestrator | Wednesday 14 May 2025 02:08:35 +0000 (0:00:00.242) 0:00:38.568 ********* 2025-05-14 02:08:36.216002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:08:36.216207 | orchestrator | 2025-05-14 02:08:36.217657 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-14 02:08:36.219175 | orchestrator | Wednesday 14 May 2025 02:08:36 +0000 (0:00:00.279) 0:00:38.848 ********* 2025-05-14 02:08:37.898290 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:37.898608 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:37.900910 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:37.902190 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:37.903998 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:37.905212 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:37.906663 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:37.907805 | orchestrator | 2025-05-14 02:08:37.908794 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-14 02:08:37.909230 | orchestrator | Wednesday 14 May 2025 02:08:37 +0000 (0:00:01.681) 0:00:40.530 ********* 2025-05-14 02:08:39.012768 | orchestrator | changed: [testbed-manager] 2025-05-14 02:08:39.013680 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:39.017432 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:39.017469 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:39.017481 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:39.018383 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:39.020172 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:39.020196 | orchestrator | 2025-05-14 02:08:39.020210 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-14 02:08:39.020223 | orchestrator | Wednesday 14 May 2025 02:08:38 +0000 (0:00:01.109) 0:00:41.639 ********* 2025-05-14 02:08:39.923616 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:39.924192 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:39.924540 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:39.925591 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:39.926706 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:39.927267 | orchestrator | ok: 
[testbed-node-2] 2025-05-14 02:08:39.927764 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:39.928267 | orchestrator | 2025-05-14 02:08:39.929152 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-14 02:08:39.929258 | orchestrator | Wednesday 14 May 2025 02:08:39 +0000 (0:00:00.918) 0:00:42.557 ********* 2025-05-14 02:08:40.236836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:08:40.238193 | orchestrator | 2025-05-14 02:08:40.238650 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-14 02:08:40.239832 | orchestrator | Wednesday 14 May 2025 02:08:40 +0000 (0:00:00.312) 0:00:42.870 ********* 2025-05-14 02:08:41.293682 | orchestrator | changed: [testbed-manager] 2025-05-14 02:08:41.293797 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:41.294184 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:41.294624 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:41.295413 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:41.296251 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:41.297031 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:41.297348 | orchestrator | 2025-05-14 02:08:41.298603 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-14 02:08:41.298801 | orchestrator | Wednesday 14 May 2025 02:08:41 +0000 (0:00:01.053) 0:00:43.923 ********* 2025-05-14 02:08:41.372458 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:08:41.394188 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:41.409770 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:41.543071 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:41.544005 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:41.545102 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:41.546427 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:41.549042 | orchestrator | 2025-05-14 02:08:41.549065 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-14 02:08:41.550685 | orchestrator | Wednesday 14 May 2025 02:08:41 +0000 (0:00:00.254) 0:00:44.178 ********* 2025-05-14 02:08:52.691361 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:52.691466 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:52.691481 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:52.691492 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:52.694275 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:52.694315 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:52.694654 | orchestrator | changed: [testbed-manager] 2025-05-14 02:08:52.695089 | orchestrator | 2025-05-14 02:08:52.695611 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-14 02:08:52.696091 | orchestrator | Wednesday 14 May 2025 02:08:52 +0000 (0:00:11.144) 0:00:55.322 ********* 2025-05-14 02:08:53.680714 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:53.680924 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:53.682852 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:53.683384 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:53.686711 | 
orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:53.687939 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:53.692951 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:53.692978 | orchestrator | 2025-05-14 02:08:53.695404 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-14 02:08:53.699006 | orchestrator | Wednesday 14 May 2025 02:08:53 +0000 (0:00:00.993) 0:00:56.316 ********* 2025-05-14 02:08:54.576302 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:54.577325 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:54.577465 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:54.577773 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:54.578715 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:54.580738 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:54.581073 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:54.581859 | orchestrator | 2025-05-14 02:08:54.581957 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-14 02:08:54.582219 | orchestrator | Wednesday 14 May 2025 02:08:54 +0000 (0:00:00.893) 0:00:57.209 ********* 2025-05-14 02:08:54.667156 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:54.693779 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:54.720903 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:54.749369 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:54.799258 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:54.799342 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:54.799427 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:54.799968 | orchestrator | 2025-05-14 02:08:54.799992 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-14 02:08:54.800147 | orchestrator | Wednesday 14 May 2025 02:08:54 +0000 (0:00:00.225) 0:00:57.434 ********* 2025-05-14 02:08:54.874493 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:54.898723 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:54.925846 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:54.950628 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:55.038188 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:55.038285 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:55.038380 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:55.038492 | orchestrator | 2025-05-14 02:08:55.038914 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-14 02:08:55.041715 | orchestrator | Wednesday 14 May 2025 02:08:55 +0000 (0:00:00.238) 0:00:57.673 ********* 2025-05-14 02:08:55.356786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:08:55.357196 | orchestrator | 2025-05-14 02:08:55.357998 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-14 02:08:55.361471 | orchestrator | Wednesday 14 May 2025 02:08:55 +0000 (0:00:00.317) 0:00:57.990 ********* 2025-05-14 02:08:57.069520 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:57.069766 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:57.070616 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:57.071439 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:57.072256 | 
orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:57.073175 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:57.073458 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:57.074238 | orchestrator | 2025-05-14 02:08:57.075562 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-14 02:08:57.076423 | orchestrator | Wednesday 14 May 2025 02:08:57 +0000 (0:00:01.711) 0:00:59.702 ********* 2025-05-14 02:08:57.618610 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:57.621197 | orchestrator | changed: [testbed-manager] 2025-05-14 02:08:57.621230 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:57.621242 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:57.621796 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:57.622442 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:57.623243 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:57.623695 | orchestrator | 2025-05-14 02:08:57.624426 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-14 02:08:57.624934 | orchestrator | Wednesday 14 May 2025 02:08:57 +0000 (0:00:00.550) 0:01:00.252 ********* 2025-05-14 02:08:57.706080 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:57.741362 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:57.773909 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:57.800253 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:57.875189 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:57.875354 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:57.876413 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:57.877249 | orchestrator | 2025-05-14 02:08:57.879384 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-14 02:08:57.879409 | orchestrator | Wednesday 14 May 2025 02:08:57 +0000 (0:00:00.257) 0:01:00.509 ********* 2025-05-14 02:08:59.031936 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:59.032501 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:59.032830 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:59.034509 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:59.034733 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:59.035128 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:59.035537 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:59.035940 | orchestrator | 2025-05-14 02:08:59.036325 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-14 02:08:59.037167 | orchestrator | Wednesday 14 May 2025 02:08:59 +0000 (0:00:01.151) 0:01:01.661 ********* 2025-05-14 02:09:00.695513 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:09:00.695941 | orchestrator | changed: [testbed-manager] 2025-05-14 02:09:00.696684 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:09:00.699366 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:00.699432 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:00.700165 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:09:00.701212 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:00.701421 | orchestrator | 2025-05-14 02:09:00.702145 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-14 02:09:00.702723 | orchestrator | Wednesday 14 May 2025 02:09:00 +0000 (0:00:01.667) 0:01:03.329 ********* 2025-05-14 02:09:03.049272 | orchestrator | ok: 
[testbed-manager] 2025-05-14 02:09:03.049765 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:03.051765 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:03.052423 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:03.054554 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:03.055253 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:03.055834 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:03.056766 | orchestrator | 2025-05-14 02:09:03.057516 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-14 02:09:03.058546 | orchestrator | Wednesday 14 May 2025 02:09:03 +0000 (0:00:02.353) 0:01:05.682 ********* 2025-05-14 02:09:41.880440 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:41.881391 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:41.881453 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:41.881466 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:41.881523 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:41.883482 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:41.883588 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:41.884889 | orchestrator | 2025-05-14 02:09:41.886163 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-14 02:09:41.886187 | orchestrator | Wednesday 14 May 2025 02:09:41 +0000 (0:00:38.829) 0:01:44.512 ********* 2025-05-14 02:11:06.439240 | orchestrator | changed: [testbed-manager] 2025-05-14 02:11:06.439361 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:11:06.439376 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:11:06.439387 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:11:06.439397 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:11:06.440994 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:11:06.441397 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:11:06.441753 | orchestrator | 2025-05-14 02:11:06.442142 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-14 02:11:06.442519 | orchestrator | Wednesday 14 May 2025 02:11:06 +0000 (0:01:24.551) 0:03:09.063 ********* 2025-05-14 02:11:08.037830 | orchestrator | ok: [testbed-manager] 2025-05-14 02:11:08.037937 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:11:08.038862 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:11:08.039582 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:11:08.040664 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:11:08.041485 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:11:08.042856 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:11:08.043315 | orchestrator | 2025-05-14 02:11:08.044405 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-14 02:11:08.045106 | orchestrator | Wednesday 14 May 2025 02:11:08 +0000 (0:00:01.608) 0:03:10.672 ********* 2025-05-14 02:11:20.141209 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:11:20.141413 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:11:20.141431 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:11:20.141442 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:11:20.141453 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:11:20.141464 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:11:20.141475 | orchestrator | changed: [testbed-manager] 2025-05-14 02:11:20.141559 | orchestrator | 2025-05-14 02:11:20.141843 | orchestrator | TASK [osism.commons.sysctl : Include sysctl 
tasks] ***************************** 2025-05-14 02:11:20.142477 | orchestrator | Wednesday 14 May 2025 02:11:20 +0000 (0:00:12.095) 0:03:22.767 ********* 2025-05-14 02:11:20.502302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-14 02:11:20.502447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-14 02:11:20.505085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-14 02:11:20.505342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-14 02:11:20.505866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-05-14 02:11:20.506985 | orchestrator | 2025-05-14 02:11:20.507726 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-14 02:11:20.508859 | orchestrator | Wednesday 14 May 2025 02:11:20 +0000 (0:00:00.367) 0:03:23.135 ********* 2025-05-14 02:11:20.562120 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 02:11:20.594281 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:11:20.595207 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 02:11:20.596011 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 02:11:20.628143 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:11:20.665862 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:11:20.666820 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 02:11:20.704788 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:11:21.251366 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:11:21.251544 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:11:21.252458 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:11:21.254180 | orchestrator | 2025-05-14 02:11:21.254388 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-14 02:11:21.255119 | orchestrator | Wednesday 14 May 2025 02:11:21 +0000 (0:00:00.748) 0:03:23.883 ********* 2025-05-14 02:11:21.282864 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 02:11:21.322364 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 02:11:21.324756 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 02:11:21.324801 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 02:11:21.324811 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 02:11:21.324820 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 02:11:21.324875 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 02:11:21.326778 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 02:11:21.326902 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 02:11:21.326982 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 02:11:21.327296 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 02:11:21.327511 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 02:11:21.356521 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:11:21.356680 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 02:11:21.356748 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 02:11:21.357347 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 02:11:21.357460 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 02:11:21.358673 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 02:11:21.358689 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 02:11:21.358807 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 02:11:21.400644 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 02:11:21.401193 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 02:11:21.401217 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 02:11:21.401409 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 02:11:21.401932 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 02:11:21.402955 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 02:11:21.402986 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 02:11:21.403447 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 02:11:21.403844 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 02:11:21.404186 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 02:11:21.404650 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 02:11:21.405049 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 02:11:21.406760 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 02:11:21.430623 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:11:21.431132 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 02:11:21.431158 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 02:11:21.431599 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 02:11:21.431858 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 02:11:21.432250 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 02:11:21.432568 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 02:11:21.432947 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 02:11:21.433356 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 02:11:21.451979 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:11:27.839269 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:11:27.841350 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-14 02:11:27.843332 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-14 02:11:27.844872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-14 02:11:27.846373 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-14 02:11:27.846420 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-14 02:11:27.850187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-14 02:11:27.850231 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-14 02:11:27.850243 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-14 02:11:27.850255 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-14 02:11:27.850309 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-14 02:11:27.851203 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-14 02:11:27.852140 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-14 02:11:27.852363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-14 02:11:27.852995 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-14 02:11:27.854630 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-14 02:11:27.855504 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-14 02:11:27.855958 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-14 02:11:27.856518 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-14 02:11:27.857185 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-14 02:11:27.857914 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-14 02:11:27.860205 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-14 02:11:27.860623 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-14 02:11:27.861205 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-14 02:11:27.861906 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-14 02:11:27.862221 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-14 02:11:27.862610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-14 02:11:27.863021 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-14 02:11:27.863376 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-14 02:11:27.863828 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-14 02:11:27.864250 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-14 02:11:27.864647 | orchestrator | 2025-05-14 02:11:27.865093 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-14 02:11:27.865416 | orchestrator | Wednesday 14 May 2025 02:11:27 +0000 (0:00:06.588) 0:03:30.471 ********* 2025-05-14 02:11:28.427001 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:11:28.427568 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 
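
The sysctl tuning visible in the task output above (vm.max_map_count for the elasticsearch group, the net.ipv4/net.core keepalive and buffer settings for rabbitmq, vm.swappiness for all hosts, net.netfilter.nf_conntrack_max for the compute nodes and fs.inotify.max_user_instances for the k3s nodes) could be reproduced outside the testbed with a plain ansible.posix.sysctl loop. The following is a minimal illustrative sketch using the values taken from this log; it is not the actual implementation of the osism.commons.sysctl role, and the play/host names are placeholders.

- name: Apply testbed sysctl tuning (illustrative sketch, not the osism.commons.sysctl role)
  hosts: all
  become: true
  tasks:
    - name: Set kernel parameters observed in the bootstrap log
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true   # apply immediately with sysctl -w
        reload: true       # and reload from the sysctl file
      loop:
        - { name: vm.swappiness, value: 1 }                          # 'generic' set, all hosts
        - { name: vm.max_map_count, value: 262144 }                  # 'elasticsearch' set
        - { name: net.netfilter.nf_conntrack_max, value: 1048576 }   # 'compute' set
        - { name: fs.inotify.max_user_instances, value: 1024 }       # 'k3s_node' set

In the run logged here, each parameter set is only applied to hosts in the matching group, which is why the manager and the control-plane nodes report "skipping" for the elasticsearch, compute and k3s_node items while testbed-node-0/1/2 and testbed-node-3/4/5 report "changed" for their respective sets.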
2025-05-14 02:11:28.429039 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:11:28.429086 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:11:28.429786 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:11:28.430225 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:11:28.430939 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:11:28.431431 | orchestrator | 2025-05-14 02:11:28.432127 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-14 02:11:28.432514 | orchestrator | Wednesday 14 May 2025 02:11:28 +0000 (0:00:00.589) 0:03:31.060 ********* 2025-05-14 02:11:28.488203 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 02:11:28.516466 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:11:28.594404 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 02:11:28.966873 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:11:28.967000 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 02:11:28.967072 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:11:28.968420 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 02:11:28.969312 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:11:28.970129 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-14 02:11:28.971333 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-14 02:11:28.972893 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-14 02:11:28.973325 | orchestrator | 2025-05-14 02:11:28.974437 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-14 02:11:28.975143 | orchestrator | Wednesday 14 May 2025 02:11:28 +0000 (0:00:00.538) 0:03:31.599 ********* 2025-05-14 02:11:29.017516 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 02:11:29.045505 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:11:29.119145 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 02:11:29.518389 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:11:29.519954 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 02:11:29.523292 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:11:29.523334 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 02:11:29.523347 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:11:29.523358 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-14 02:11:29.523414 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-05-14 02:11:29.524079 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-14 02:11:29.525024 | orchestrator | 2025-05-14 02:11:29.525543 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-14 02:11:29.526133 | orchestrator | Wednesday 14 May 2025 02:11:29 +0000 (0:00:00.553) 0:03:32.152 ********* 2025-05-14 02:11:29.599112 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:11:29.628686 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:11:29.653805 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:11:29.676404 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:11:29.798781 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:11:29.799506 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:11:29.803602 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:11:29.803694 | orchestrator | 2025-05-14 02:11:29.803762 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-14 02:11:29.803777 | orchestrator | Wednesday 14 May 2025 02:11:29 +0000 (0:00:00.280) 0:03:32.433 ********* 2025-05-14 02:11:35.733002 | orchestrator | ok: [testbed-manager] 2025-05-14 02:11:35.733462 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:11:35.734469 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:11:35.734506 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:11:35.735265 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:11:35.735777 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:11:35.736279 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:11:35.737003 | orchestrator | 2025-05-14 02:11:35.738000 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-14 02:11:35.738960 | orchestrator | Wednesday 14 May 2025 02:11:35 +0000 (0:00:05.928) 0:03:38.362 ********* 2025-05-14 02:11:35.819733 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-14 02:11:35.819877 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-14 02:11:35.865975 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:11:35.866200 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-14 02:11:35.923317 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:11:35.923521 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-14 02:11:35.963123 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:11:35.963296 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-14 02:11:35.997570 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:11:36.057671 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:11:36.057856 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-14 02:11:36.058560 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:11:36.058928 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-14 02:11:36.059325 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:11:36.060244 | orchestrator | 2025-05-14 02:11:36.060777 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-14 02:11:36.062103 | orchestrator | Wednesday 14 May 2025 02:11:36 +0000 (0:00:00.329) 0:03:38.692 ********* 2025-05-14 02:11:37.114152 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-14 02:11:37.114325 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-14 02:11:37.114811 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2025-05-14 02:11:37.115332 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-14 02:11:37.118244 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-14 02:11:37.118268 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-14 02:11:37.118280 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-14 02:11:37.118292 | orchestrator | 2025-05-14 02:11:37.119208 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-14 02:11:37.119376 | orchestrator | Wednesday 14 May 2025 02:11:37 +0000 (0:00:01.054) 0:03:39.746 ********* 2025-05-14 02:11:37.554124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:11:37.554571 | orchestrator | 2025-05-14 02:11:37.555885 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-14 02:11:37.556618 | orchestrator | Wednesday 14 May 2025 02:11:37 +0000 (0:00:00.439) 0:03:40.186 ********* 2025-05-14 02:11:38.838388 | orchestrator | ok: [testbed-manager] 2025-05-14 02:11:38.841295 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:11:38.841368 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:11:38.841391 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:11:38.842538 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:11:38.844186 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:11:38.845309 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:11:38.848241 | orchestrator | 2025-05-14 02:11:38.848394 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-14 02:11:38.849458 | orchestrator | Wednesday 14 May 2025 02:11:38 +0000 (0:00:01.284) 0:03:41.470 ********* 2025-05-14 02:11:39.451955 | orchestrator | ok: [testbed-manager] 2025-05-14 02:11:39.454625 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:11:39.454813 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:11:39.454844 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:11:39.454861 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:11:39.455614 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:11:39.456366 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:11:39.457300 | orchestrator | 2025-05-14 02:11:39.457806 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-14 02:11:39.459672 | orchestrator | Wednesday 14 May 2025 02:11:39 +0000 (0:00:00.614) 0:03:42.085 ********* 2025-05-14 02:11:40.146521 | orchestrator | changed: [testbed-manager] 2025-05-14 02:11:40.146670 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:11:40.146812 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:11:40.147495 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:11:40.151774 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:11:40.152441 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:11:40.153261 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:11:40.154682 | orchestrator | 2025-05-14 02:11:40.155178 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-14 02:11:40.156215 | orchestrator | Wednesday 14 May 2025 02:11:40 +0000 (0:00:00.694) 0:03:42.780 ********* 2025-05-14 02:11:40.723871 | orchestrator | ok: [testbed-manager] 2025-05-14 
02:11:40.724019 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:11:40.724104 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:11:40.725110 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:11:40.725560 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:11:40.726289 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:11:40.726672 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:11:40.727321 | orchestrator | 2025-05-14 02:11:40.727976 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-14 02:11:40.728978 | orchestrator | Wednesday 14 May 2025 02:11:40 +0000 (0:00:00.576) 0:03:43.356 ********* 2025-05-14 02:11:41.651981 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747186942.616268, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.652117 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747186981.6285899, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.652169 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747186979.0898278, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.652189 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747186966.6276715, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.652415 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747186986.8500066, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.653336 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747186978.411224, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.654078 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747187000.4499393, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.654954 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186977.5743878, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.655375 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186899.1953807, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.655945 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186906.6646953, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.656456 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186892.6876504, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.657059 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186892.5873578, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.657934 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186900.8630457, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.658333 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186919.700315, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:11:41.658857 | orchestrator | 2025-05-14 02:11:41.659399 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-14 02:11:41.659832 | orchestrator | Wednesday 14 May 2025 02:11:41 +0000 (0:00:00.927) 0:03:44.283 ********* 2025-05-14 02:11:42.847024 | orchestrator | changed: [testbed-manager] 2025-05-14 02:11:42.847202 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:11:42.847232 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:11:42.847245 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:11:42.847341 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:11:42.847637 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:11:42.850878 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:11:42.851813 | orchestrator | 2025-05-14 02:11:42.852868 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-14 02:11:42.853697 | orchestrator | Wednesday 14 May 2025 02:11:42 +0000 (0:00:01.196) 0:03:45.480 ********* 2025-05-14 02:11:43.998309 | orchestrator | changed: [testbed-manager] 2025-05-14 02:11:43.998522 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:11:43.999443 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:11:44.000194 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:11:44.000785 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:11:44.001471 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:11:44.002168 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:11:44.003043 | orchestrator | 2025-05-14 02:11:44.003680 | orchestrator | TASK [osism.commons.motd : Configure SSH to print 
the motd] ******************** 2025-05-14 02:11:44.004373 | orchestrator | Wednesday 14 May 2025 02:11:43 +0000 (0:00:01.151) 0:03:46.631 ********* 2025-05-14 02:11:44.069954 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:11:44.107643 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:11:44.137110 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:11:44.165233 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:11:44.192262 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:11:44.250492 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:11:44.251283 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:11:44.252077 | orchestrator | 2025-05-14 02:11:44.253222 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-14 02:11:44.253418 | orchestrator | Wednesday 14 May 2025 02:11:44 +0000 (0:00:00.253) 0:03:46.885 ********* 2025-05-14 02:11:44.924057 | orchestrator | ok: [testbed-manager] 2025-05-14 02:11:44.924968 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:11:44.926282 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:11:44.926739 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:11:44.927787 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:11:44.928996 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:11:44.929182 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:11:44.932373 | orchestrator | 2025-05-14 02:11:44.933476 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-14 02:11:44.934350 | orchestrator | Wednesday 14 May 2025 02:11:44 +0000 (0:00:00.671) 0:03:47.556 ********* 2025-05-14 02:11:45.267823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:11:45.268002 | orchestrator | 2025-05-14 02:11:45.268452 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-14 02:11:45.268475 | orchestrator | Wednesday 14 May 2025 02:11:45 +0000 (0:00:00.345) 0:03:47.901 ********* 2025-05-14 02:11:53.412410 | orchestrator | ok: [testbed-manager] 2025-05-14 02:11:53.413052 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:11:53.414539 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:11:53.416794 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:11:53.417669 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:11:53.418872 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:11:53.419791 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:11:53.420236 | orchestrator | 2025-05-14 02:11:53.421031 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-14 02:11:53.423350 | orchestrator | Wednesday 14 May 2025 02:11:53 +0000 (0:00:08.142) 0:03:56.044 ********* 2025-05-14 02:11:54.682564 | orchestrator | ok: [testbed-manager] 2025-05-14 02:11:54.683175 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:11:54.683363 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:11:54.683827 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:11:54.685404 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:11:54.685467 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:11:54.685487 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:11:54.685507 | orchestrator | 2025-05-14 02:11:54.685527 | orchestrator | 
TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-14 02:11:54.685923 | orchestrator | Wednesday 14 May 2025 02:11:54 +0000 (0:00:01.270) 0:03:57.314 ********* 2025-05-14 02:11:55.829176 | orchestrator | ok: [testbed-manager] 2025-05-14 02:11:55.829807 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:11:55.829917 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:11:55.831405 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:11:55.831428 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:11:55.831863 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:11:55.832445 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:11:55.832947 | orchestrator | 2025-05-14 02:11:55.833418 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-14 02:11:55.834204 | orchestrator | Wednesday 14 May 2025 02:11:55 +0000 (0:00:01.146) 0:03:58.461 ********* 2025-05-14 02:11:56.240421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:11:56.240529 | orchestrator | 2025-05-14 02:11:56.240632 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-14 02:11:56.241399 | orchestrator | Wednesday 14 May 2025 02:11:56 +0000 (0:00:00.413) 0:03:58.874 ********* 2025-05-14 02:12:04.973606 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:04.977045 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:04.978446 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:04.978957 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:04.980235 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:04.981013 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:04.981393 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:04.981943 | orchestrator | 2025-05-14 02:12:04.982976 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-14 02:12:04.983058 | orchestrator | Wednesday 14 May 2025 02:12:04 +0000 (0:00:08.729) 0:04:07.604 ********* 2025-05-14 02:12:05.791006 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:05.791444 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:05.792405 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:05.793053 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:05.793351 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:05.794485 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:05.794687 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:05.795110 | orchestrator | 2025-05-14 02:12:05.797259 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-14 02:12:05.797955 | orchestrator | Wednesday 14 May 2025 02:12:05 +0000 (0:00:00.819) 0:04:08.424 ********* 2025-05-14 02:12:06.987053 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:06.987500 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:06.988826 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:06.989776 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:06.990457 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:06.990864 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:06.991340 | orchestrator | changed: [testbed-node-2] 
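The smartd sequence above (install smartmontools, create /var/log/smartd, copy the configuration file, then manage the service in the next task of the log) can be approximated with plain Ansible modules. This is a sketch under assumptions: the destination path, file modes and template name are not taken from the osism.services.smartd role.

    # Sketch only: approximates the smartd steps logged above; paths, modes
    # and the template name are assumptions.
    - name: Install smartmontools package
      ansible.builtin.apt:
        name: smartmontools
        state: present

    - name: Create /var/log/smartd directory
      ansible.builtin.file:
        path: /var/log/smartd
        state: directory
        owner: root
        group: root
        mode: "0755"  # assumed

    - name: Copy smartmontools configuration file
      ansible.builtin.template:
        src: smartd.conf.j2     # assumed template name
        dest: /etc/smartd.conf  # assumed destination on Debian/Ubuntu
        mode: "0644"

    - name: Manage smartd service
      ansible.builtin.service:
        name: smartd
        state: started
        enabled: true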
2025-05-14 02:12:06.992085 | orchestrator | 2025-05-14 02:12:06.992420 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-14 02:12:06.992683 | orchestrator | Wednesday 14 May 2025 02:12:06 +0000 (0:00:01.194) 0:04:09.619 ********* 2025-05-14 02:12:08.033674 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:08.034468 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:08.034506 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:08.034688 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:08.035418 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:08.035783 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:08.036461 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:08.037099 | orchestrator | 2025-05-14 02:12:08.038120 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-14 02:12:08.038480 | orchestrator | Wednesday 14 May 2025 02:12:08 +0000 (0:00:01.046) 0:04:10.665 ********* 2025-05-14 02:12:08.109953 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:08.183713 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:08.216828 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:08.247922 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:08.335447 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:08.336236 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:08.336433 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:08.337091 | orchestrator | 2025-05-14 02:12:08.337533 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-14 02:12:08.337887 | orchestrator | Wednesday 14 May 2025 02:12:08 +0000 (0:00:00.305) 0:04:10.970 ********* 2025-05-14 02:12:08.423810 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:08.457043 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:08.492904 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:08.530314 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:08.567884 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:08.638720 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:08.639488 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:08.639772 | orchestrator | 2025-05-14 02:12:08.640214 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-14 02:12:08.641132 | orchestrator | Wednesday 14 May 2025 02:12:08 +0000 (0:00:00.302) 0:04:11.273 ********* 2025-05-14 02:12:08.738644 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:08.775025 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:08.818635 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:08.864544 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:08.945631 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:08.947046 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:08.948320 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:08.951399 | orchestrator | 2025-05-14 02:12:08.951441 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-14 02:12:08.951456 | orchestrator | Wednesday 14 May 2025 02:12:08 +0000 (0:00:00.306) 0:04:11.579 ********* 2025-05-14 02:12:14.816865 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:14.816971 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:14.817444 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:14.818410 | orchestrator 
| ok: [testbed-node-5] 2025-05-14 02:12:14.820026 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:14.821884 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:14.822496 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:14.823886 | orchestrator | 2025-05-14 02:12:14.824937 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-14 02:12:14.827173 | orchestrator | Wednesday 14 May 2025 02:12:14 +0000 (0:00:05.869) 0:04:17.449 ********* 2025-05-14 02:12:15.258940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:12:15.259927 | orchestrator | 2025-05-14 02:12:15.260080 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-14 02:12:15.260283 | orchestrator | Wednesday 14 May 2025 02:12:15 +0000 (0:00:00.444) 0:04:17.893 ********* 2025-05-14 02:12:15.336028 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-14 02:12:15.336232 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-14 02:12:15.336290 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-14 02:12:15.389522 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-14 02:12:15.390483 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:12:15.391631 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-14 02:12:15.392594 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-14 02:12:15.427273 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:12:15.428523 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-14 02:12:15.468619 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-14 02:12:15.469496 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:12:15.470676 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-14 02:12:15.473986 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-14 02:12:15.502903 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:12:15.580088 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-14 02:12:15.580691 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:12:15.581438 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-14 02:12:15.584819 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:12:15.585237 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-14 02:12:15.586150 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-14 02:12:15.586854 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:12:15.587416 | orchestrator | 2025-05-14 02:12:15.588282 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-14 02:12:15.588827 | orchestrator | Wednesday 14 May 2025 02:12:15 +0000 (0:00:00.320) 0:04:18.214 ********* 2025-05-14 02:12:15.974582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:12:15.974775 | orchestrator | 2025-05-14 02:12:15.975621 | 
orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-14 02:12:15.975932 | orchestrator | Wednesday 14 May 2025 02:12:15 +0000 (0:00:00.392) 0:04:18.607 ********* 2025-05-14 02:12:16.043862 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-14 02:12:16.078500 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:12:16.079079 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-14 02:12:16.118288 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-14 02:12:16.120801 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:12:16.123478 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-14 02:12:16.167815 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:12:16.168578 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-14 02:12:16.199107 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:12:16.289505 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:12:16.290239 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-14 02:12:16.291296 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:12:16.292814 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-14 02:12:16.294672 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:12:16.295693 | orchestrator | 2025-05-14 02:12:16.296612 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-14 02:12:16.297262 | orchestrator | Wednesday 14 May 2025 02:12:16 +0000 (0:00:00.316) 0:04:18.923 ********* 2025-05-14 02:12:16.697070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:12:16.697850 | orchestrator | 2025-05-14 02:12:16.704453 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-14 02:12:16.704552 | orchestrator | Wednesday 14 May 2025 02:12:16 +0000 (0:00:00.407) 0:04:19.330 ********* 2025-05-14 02:12:51.560592 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:51.560706 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:51.561036 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:51.562119 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:51.562903 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:51.564214 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:51.564642 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:51.565292 | orchestrator | 2025-05-14 02:12:51.567833 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-14 02:12:51.568206 | orchestrator | Wednesday 14 May 2025 02:12:51 +0000 (0:00:34.857) 0:04:54.188 ********* 2025-05-14 02:12:59.530325 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:59.532172 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:59.533182 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:59.534235 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:59.535149 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:59.535429 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:59.535559 | orchestrator | changed: [testbed-node-4] 
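The cleanup role above purges a list of unwanted packages and cloud-init, and the tasks that follow in the log run apt's autoclean/autoremove housekeeping. A rough sketch of that pattern; the contents of cleanup_packages are an assumption, since the log does not name the removed packages.

    # Sketch only: the package cleanup pattern seen in the log. The entries
    # in cleanup_packages are assumed; the log does not list them.
    - name: Cleanup installed packages
      ansible.builtin.apt:
        name: "{{ cleanup_packages }}"
        state: absent
        purge: true
      vars:
        cleanup_packages:
          - snapd             # assumed example entry
          - lxd-agent-loader  # assumed example entry

    - name: Remove cloudinit package
      ansible.builtin.apt:
        name: cloud-init
        state: absent
        purge: true

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true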
2025-05-14 02:12:59.536936 | orchestrator | 2025-05-14 02:12:59.539675 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-14 02:12:59.540666 | orchestrator | Wednesday 14 May 2025 02:12:59 +0000 (0:00:07.971) 0:05:02.160 ********* 2025-05-14 02:13:07.338306 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:07.338448 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:07.338998 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:07.339297 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:07.341008 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:07.342486 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:07.343337 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:07.343932 | orchestrator | 2025-05-14 02:13:07.344215 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-14 02:13:07.344619 | orchestrator | Wednesday 14 May 2025 02:13:07 +0000 (0:00:07.810) 0:05:09.970 ********* 2025-05-14 02:13:08.979815 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:08.980819 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:08.983272 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:08.983625 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:08.983651 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:08.983663 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:08.984168 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:08.984631 | orchestrator | 2025-05-14 02:13:08.985350 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-14 02:13:08.986168 | orchestrator | Wednesday 14 May 2025 02:13:08 +0000 (0:00:01.644) 0:05:11.614 ********* 2025-05-14 02:13:14.570353 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:14.570465 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:14.571491 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:14.573350 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:14.575665 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:14.576898 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:14.578354 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:14.579158 | orchestrator | 2025-05-14 02:13:14.579804 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-14 02:13:14.581092 | orchestrator | Wednesday 14 May 2025 02:13:14 +0000 (0:00:05.587) 0:05:17.202 ********* 2025-05-14 02:13:14.968561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:13:14.968661 | orchestrator | 2025-05-14 02:13:14.969470 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-14 02:13:14.970218 | orchestrator | Wednesday 14 May 2025 02:13:14 +0000 (0:00:00.400) 0:05:17.602 ********* 2025-05-14 02:13:15.731942 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:15.732050 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:15.733380 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:15.734789 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:15.735338 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:15.736270 | orchestrator | changed: 
[testbed-node-1] 2025-05-14 02:13:15.737261 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:15.738450 | orchestrator | 2025-05-14 02:13:15.739331 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-14 02:13:15.739801 | orchestrator | Wednesday 14 May 2025 02:13:15 +0000 (0:00:00.759) 0:05:18.361 ********* 2025-05-14 02:13:17.453594 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:17.453748 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:17.454145 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:17.455496 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:17.457410 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:17.457550 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:17.458221 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:17.459026 | orchestrator | 2025-05-14 02:13:17.459671 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-14 02:13:17.460811 | orchestrator | Wednesday 14 May 2025 02:13:17 +0000 (0:00:01.724) 0:05:20.086 ********* 2025-05-14 02:13:18.268637 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:18.269105 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:18.270798 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:18.271605 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:18.272263 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:18.272933 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:18.273346 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:18.274619 | orchestrator | 2025-05-14 02:13:18.274960 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-14 02:13:18.275333 | orchestrator | Wednesday 14 May 2025 02:13:18 +0000 (0:00:00.815) 0:05:20.901 ********* 2025-05-14 02:13:18.352359 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:13:18.390720 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:13:18.430608 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:13:18.469540 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:13:18.513330 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:13:18.589835 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:13:18.590098 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:13:18.590630 | orchestrator | 2025-05-14 02:13:18.591064 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-14 02:13:18.594307 | orchestrator | Wednesday 14 May 2025 02:13:18 +0000 (0:00:00.322) 0:05:21.224 ********* 2025-05-14 02:13:18.656474 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:13:18.703138 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:13:18.737447 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:13:18.772718 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:13:18.812366 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:13:18.994330 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:13:18.997986 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:13:18.998096 | orchestrator | 2025-05-14 02:13:18.998116 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-14 02:13:18.999069 | orchestrator | Wednesday 14 May 2025 02:13:18 +0000 (0:00:00.402) 0:05:21.627 ********* 2025-05-14 02:13:19.112868 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:19.151003 | 
orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:19.206688 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:19.236825 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:19.317637 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:19.317798 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:19.317815 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:19.317936 | orchestrator | 2025-05-14 02:13:19.318317 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-14 02:13:19.319213 | orchestrator | Wednesday 14 May 2025 02:13:19 +0000 (0:00:00.322) 0:05:21.949 ********* 2025-05-14 02:13:19.390683 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:13:19.461570 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:13:19.494699 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:13:19.525350 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:13:19.592811 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:13:19.593497 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:13:19.597476 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:13:19.597506 | orchestrator | 2025-05-14 02:13:19.597515 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-14 02:13:19.597548 | orchestrator | Wednesday 14 May 2025 02:13:19 +0000 (0:00:00.278) 0:05:22.227 ********* 2025-05-14 02:13:19.713179 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:19.749332 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:19.785164 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:19.820550 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:19.910142 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:19.911083 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:19.911813 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:19.915473 | orchestrator | 2025-05-14 02:13:19.916267 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-14 02:13:19.917158 | orchestrator | Wednesday 14 May 2025 02:13:19 +0000 (0:00:00.316) 0:05:22.544 ********* 2025-05-14 02:13:19.980350 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:13:20.018697 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:13:20.064033 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:13:20.095983 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:13:20.196041 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:13:20.196274 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:13:20.197024 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:13:20.197569 | orchestrator | 2025-05-14 02:13:20.198316 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-14 02:13:20.199034 | orchestrator | Wednesday 14 May 2025 02:13:20 +0000 (0:00:00.286) 0:05:22.830 ********* 2025-05-14 02:13:20.338234 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:13:20.373527 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:13:20.414743 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:13:20.446602 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:13:20.515371 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:13:20.515574 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:13:20.516313 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:13:20.517361 | orchestrator | 2025-05-14 02:13:20.518240 | orchestrator | TASK 
[osism.services.docker : Include docker install tasks] ******************** 2025-05-14 02:13:20.518860 | orchestrator | Wednesday 14 May 2025 02:13:20 +0000 (0:00:00.317) 0:05:23.148 ********* 2025-05-14 02:13:21.074992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:13:21.075217 | orchestrator | 2025-05-14 02:13:21.075870 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-14 02:13:21.079100 | orchestrator | Wednesday 14 May 2025 02:13:21 +0000 (0:00:00.559) 0:05:23.707 ********* 2025-05-14 02:13:21.927661 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:21.928628 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:21.930445 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:21.931670 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:21.932567 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:21.933649 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:21.934140 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:21.934751 | orchestrator | 2025-05-14 02:13:21.935668 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-14 02:13:21.936194 | orchestrator | Wednesday 14 May 2025 02:13:21 +0000 (0:00:00.852) 0:05:24.560 ********* 2025-05-14 02:13:24.726389 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:24.727905 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:24.729042 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:24.730128 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:24.732364 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:24.732596 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:24.734714 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:24.735855 | orchestrator | 2025-05-14 02:13:24.736731 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-14 02:13:24.737283 | orchestrator | Wednesday 14 May 2025 02:13:24 +0000 (0:00:02.794) 0:05:27.354 ********* 2025-05-14 02:13:24.796375 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-14 02:13:24.875739 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-14 02:13:24.875870 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-14 02:13:24.876302 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-14 02:13:24.877412 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-14 02:13:24.877593 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-14 02:13:24.958965 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:13:24.959137 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-14 02:13:24.959435 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-14 02:13:24.960307 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-14 02:13:25.047045 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:13:25.047604 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-14 02:13:25.048325 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-14 02:13:25.048964 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-14 
02:13:25.140541 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:13:25.140799 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-14 02:13:25.167803 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-14 02:13:25.167873 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-14 02:13:25.228045 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:13:25.230241 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-14 02:13:25.232976 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-14 02:13:25.233214 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-14 02:13:25.373941 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:13:25.374086 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:13:25.374239 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-14 02:13:25.374807 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-14 02:13:25.375335 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-14 02:13:25.375996 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:13:25.376680 | orchestrator | 2025-05-14 02:13:25.377289 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-14 02:13:25.377612 | orchestrator | Wednesday 14 May 2025 02:13:25 +0000 (0:00:00.652) 0:05:28.007 ********* 2025-05-14 02:13:31.680204 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:31.682808 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:31.682871 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:31.683642 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:31.684594 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:31.684833 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:31.685369 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:31.685967 | orchestrator | 2025-05-14 02:13:31.686619 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-14 02:13:31.687107 | orchestrator | Wednesday 14 May 2025 02:13:31 +0000 (0:00:06.304) 0:05:34.311 ********* 2025-05-14 02:13:32.695886 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:32.696394 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:32.697382 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:32.698923 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:32.700813 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:32.702748 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:32.703241 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:32.704323 | orchestrator | 2025-05-14 02:13:32.705670 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-14 02:13:32.706597 | orchestrator | Wednesday 14 May 2025 02:13:32 +0000 (0:00:01.016) 0:05:35.328 ********* 2025-05-14 02:13:40.703340 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:40.703421 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:40.704082 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:40.704229 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:40.706862 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:40.706989 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:40.707906 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:40.708927 | orchestrator | 
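The osism.services.docker tasks above prepare the Docker apt repository (apt-transport-https, repository gpg key, repository entry) before the cache update and package installation that follow. A generic sketch of those repository steps; the key URL, keyring path and repository line are assumptions based on the upstream Docker repository, not taken from the role.

    # Sketch only: generic Docker apt repository setup. URLs and the keyring
    # path are assumptions (upstream Docker repository), not role content.
    - name: Install apt-transport-https package
      ansible.builtin.apt:
        name: apt-transport-https
        state: present

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg  # assumed upstream key URL
        dest: /etc/apt/trusted.gpg.d/docker.asc            # assumed keyring path
        mode: "0644"

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: >-
          deb [signed-by=/etc/apt/trusted.gpg.d/docker.asc]
          https://download.docker.com/linux/ubuntu
          {{ ansible_distribution_release }} stable
        state: present
        update_cache: true

With update_cache enabled the repository module already refreshes the index; the role logs this as a separate "Update package cache" task.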
2025-05-14 02:13:40.709300 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-14 02:13:40.710051 | orchestrator | Wednesday 14 May 2025 02:13:40 +0000 (0:00:08.007) 0:05:43.336 ********* 2025-05-14 02:13:43.917481 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:43.917556 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:43.917967 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:43.919186 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:43.920815 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:43.921381 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:43.922150 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:43.923355 | orchestrator | 2025-05-14 02:13:43.924014 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-14 02:13:43.924900 | orchestrator | Wednesday 14 May 2025 02:13:43 +0000 (0:00:03.212) 0:05:46.548 ********* 2025-05-14 02:13:45.154043 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:45.154149 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:45.154333 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:45.155083 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:45.155571 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:45.155595 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:45.156819 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:45.157341 | orchestrator | 2025-05-14 02:13:45.157903 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-14 02:13:45.158040 | orchestrator | Wednesday 14 May 2025 02:13:45 +0000 (0:00:01.240) 0:05:47.788 ********* 2025-05-14 02:13:46.561116 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:46.562177 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:46.565576 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:46.565622 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:46.565642 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:46.565662 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:46.565675 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:46.565835 | orchestrator | 2025-05-14 02:13:46.566441 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-14 02:13:46.567540 | orchestrator | Wednesday 14 May 2025 02:13:46 +0000 (0:00:01.406) 0:05:49.194 ********* 2025-05-14 02:13:46.775658 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:13:46.838973 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:13:46.902644 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:13:46.970504 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:13:47.182744 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:13:47.182913 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:13:47.185325 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:47.185395 | orchestrator | 2025-05-14 02:13:47.185409 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-14 02:13:47.185443 | orchestrator | Wednesday 14 May 2025 02:13:47 +0000 (0:00:00.620) 0:05:49.815 ********* 2025-05-14 02:13:56.778712 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:56.779960 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:56.781670 | orchestrator | changed: [testbed-node-5] 
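The pin and lock/unlock tasks above suggest a pattern of pinning the Docker packages to a known version and keeping containerd on hold between controlled upgrades. A sketch of one way to express that, assuming an apt preferences file plus dpkg selections; the paths, package names (docker-ce, containerd.io) and the docker_version value are assumptions.

    # Sketch only: version pinning via apt preferences plus a dpkg hold.
    # Paths, package names and the docker_version value are assumptions.
    - name: Pin docker package version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce  # assumed path
        mode: "0644"
        content: |
          Package: docker-ce
          Pin: version {{ docker_version }}
          Pin-Priority: 1000
      vars:
        docker_version: "5:27.*"  # assumed placeholder version

    - name: Unlock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: install

    - name: Install containerd package
      ansible.builtin.apt:
        name: containerd.io
        state: present

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold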
2025-05-14 02:13:56.787683 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:56.787719 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:56.789000 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:56.789680 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:56.790244 | orchestrator | 2025-05-14 02:13:56.791830 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-14 02:13:56.791919 | orchestrator | Wednesday 14 May 2025 02:13:56 +0000 (0:00:09.596) 0:05:59.411 ********* 2025-05-14 02:13:57.744312 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:57.744416 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:57.745429 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:57.745863 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:57.746358 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:57.746775 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:57.747530 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:57.747943 | orchestrator | 2025-05-14 02:13:57.749146 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-14 02:13:57.749247 | orchestrator | Wednesday 14 May 2025 02:13:57 +0000 (0:00:00.964) 0:06:00.376 ********* 2025-05-14 02:14:10.486975 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:10.487099 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:10.487116 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:10.489338 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:10.489367 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:10.489379 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:10.489391 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:10.489403 | orchestrator | 2025-05-14 02:14:10.489416 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-14 02:14:10.489430 | orchestrator | Wednesday 14 May 2025 02:14:10 +0000 (0:00:12.739) 0:06:13.116 ********* 2025-05-14 02:14:23.040133 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:23.040238 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:23.040253 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:23.040365 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:23.041062 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:23.041929 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:23.043273 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:23.043529 | orchestrator | 2025-05-14 02:14:23.044176 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-14 02:14:23.044833 | orchestrator | Wednesday 14 May 2025 02:14:23 +0000 (0:00:12.555) 0:06:25.671 ********* 2025-05-14 02:14:23.381843 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-14 02:14:24.178274 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-14 02:14:24.178861 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-14 02:14:24.179978 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-14 02:14:24.180676 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-14 02:14:24.181703 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-14 02:14:24.183125 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-14 02:14:24.183493 | 
orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-14 02:14:24.184131 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-14 02:14:24.184770 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-14 02:14:24.185822 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-14 02:14:24.186607 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-14 02:14:24.187611 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-14 02:14:24.188268 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-14 02:14:24.188654 | orchestrator | 2025-05-14 02:14:24.189912 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-14 02:14:24.189939 | orchestrator | Wednesday 14 May 2025 02:14:24 +0000 (0:00:01.138) 0:06:26.810 ********* 2025-05-14 02:14:24.302248 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:24.359476 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:24.417087 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:24.488612 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:24.548241 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:24.646313 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:24.646496 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:24.646516 | orchestrator | 2025-05-14 02:14:24.647216 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-14 02:14:24.647241 | orchestrator | Wednesday 14 May 2025 02:14:24 +0000 (0:00:00.470) 0:06:27.281 ********* 2025-05-14 02:14:28.171943 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:28.172057 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:28.172137 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:28.172571 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:28.173373 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:28.173598 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:28.173962 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:28.174813 | orchestrator | 2025-05-14 02:14:28.175107 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-14 02:14:28.175129 | orchestrator | Wednesday 14 May 2025 02:14:28 +0000 (0:00:03.521) 0:06:30.802 ********* 2025-05-14 02:14:28.323379 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:28.382729 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:28.620850 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:28.686369 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:28.749398 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:28.875369 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:28.876112 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:28.876910 | orchestrator | 2025-05-14 02:14:28.877874 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-14 02:14:28.878760 | orchestrator | Wednesday 14 May 2025 02:14:28 +0000 (0:00:00.704) 0:06:31.507 ********* 2025-05-14 02:14:28.945379 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-14 02:14:28.946226 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-14 02:14:29.025625 | orchestrator | skipping: [testbed-manager] 2025-05-14 
02:14:29.026296 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-14 02:14:29.027772 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-14 02:14:29.122319 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:29.122439 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-14 02:14:29.122646 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-14 02:14:29.203291 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:29.203606 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-14 02:14:29.204258 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-14 02:14:29.268646 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:29.268834 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-14 02:14:29.269465 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-14 02:14:29.342858 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:29.344541 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-14 02:14:29.345466 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-14 02:14:29.464233 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:29.464420 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-14 02:14:29.465686 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-14 02:14:29.465919 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:29.466528 | orchestrator | 2025-05-14 02:14:29.467291 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-14 02:14:29.468998 | orchestrator | Wednesday 14 May 2025 02:14:29 +0000 (0:00:00.589) 0:06:32.096 ********* 2025-05-14 02:14:29.611552 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:29.677293 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:29.749820 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:29.824173 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:29.890228 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:29.990999 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:29.991942 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:29.993092 | orchestrator | 2025-05-14 02:14:29.993734 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-14 02:14:29.996016 | orchestrator | Wednesday 14 May 2025 02:14:29 +0000 (0:00:00.526) 0:06:32.623 ********* 2025-05-14 02:14:30.134865 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:30.203439 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:30.271128 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:30.331007 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:30.413252 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:30.525178 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:30.525566 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:30.526632 | orchestrator | 2025-05-14 02:14:30.527402 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-14 02:14:30.528671 | orchestrator | Wednesday 14 May 2025 02:14:30 +0000 (0:00:00.537) 0:06:33.160 ********* 2025-05-14 02:14:30.650067 | orchestrator | skipping: [testbed-manager] 2025-05-14 
02:14:30.731573 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:30.794608 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:30.855949 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:30.940156 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:31.073936 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:31.074150 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:31.076096 | orchestrator | 2025-05-14 02:14:31.076438 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-14 02:14:31.077645 | orchestrator | Wednesday 14 May 2025 02:14:31 +0000 (0:00:00.547) 0:06:33.707 ********* 2025-05-14 02:14:36.969091 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:36.970186 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:36.971092 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:36.972876 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:36.973757 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:36.975014 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:36.975752 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:36.976578 | orchestrator | 2025-05-14 02:14:36.977301 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-14 02:14:36.977829 | orchestrator | Wednesday 14 May 2025 02:14:36 +0000 (0:00:05.893) 0:06:39.600 ********* 2025-05-14 02:14:37.796879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:14:37.797043 | orchestrator | 2025-05-14 02:14:37.797899 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-14 02:14:37.798876 | orchestrator | Wednesday 14 May 2025 02:14:37 +0000 (0:00:00.829) 0:06:40.429 ********* 2025-05-14 02:14:38.210931 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:38.638945 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:38.639461 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:38.641101 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:38.641420 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:38.642573 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:38.643600 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:38.644330 | orchestrator | 2025-05-14 02:14:38.645202 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-14 02:14:38.646141 | orchestrator | Wednesday 14 May 2025 02:14:38 +0000 (0:00:00.840) 0:06:41.270 ********* 2025-05-14 02:14:39.089240 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:39.696116 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:39.696209 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:39.696243 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:39.696306 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:39.696793 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:39.697359 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:39.697976 | orchestrator | 2025-05-14 02:14:39.698652 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-14 02:14:39.700991 | orchestrator | Wednesday 14 May 2025 02:14:39 +0000 (0:00:01.057) 
0:06:42.328 ********* 2025-05-14 02:14:41.142205 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:41.142434 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:41.142469 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:41.143961 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:41.144712 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:41.151518 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:41.151574 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:41.151585 | orchestrator | 2025-05-14 02:14:41.151598 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-14 02:14:41.151611 | orchestrator | Wednesday 14 May 2025 02:14:41 +0000 (0:00:01.445) 0:06:43.773 ********* 2025-05-14 02:14:41.283718 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:42.512133 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:42.512795 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:42.514264 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:42.514520 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:42.516132 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:42.516648 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:42.517393 | orchestrator | 2025-05-14 02:14:42.518340 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-14 02:14:42.518635 | orchestrator | Wednesday 14 May 2025 02:14:42 +0000 (0:00:01.369) 0:06:45.143 ********* 2025-05-14 02:14:43.881957 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:43.882173 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:43.884347 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:43.885622 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:43.886669 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:43.887542 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:43.888314 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:43.889030 | orchestrator | 2025-05-14 02:14:43.890090 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-14 02:14:43.890886 | orchestrator | Wednesday 14 May 2025 02:14:43 +0000 (0:00:01.370) 0:06:46.513 ********* 2025-05-14 02:14:45.421910 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:45.422505 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:45.424133 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:45.424928 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:45.427146 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:45.427796 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:45.428939 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:45.429660 | orchestrator | 2025-05-14 02:14:45.430296 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-14 02:14:45.431257 | orchestrator | Wednesday 14 May 2025 02:14:45 +0000 (0:00:01.540) 0:06:48.053 ********* 2025-05-14 02:14:46.392484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:14:46.392666 | orchestrator | 2025-05-14 02:14:46.394127 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-14 
02:14:46.395153 | orchestrator | Wednesday 14 May 2025 02:14:46 +0000 (0:00:00.971) 0:06:49.025 ********* 2025-05-14 02:14:47.671456 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:47.673082 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:47.673328 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:47.674234 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:47.676225 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:47.676688 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:47.677666 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:47.678389 | orchestrator | 2025-05-14 02:14:47.679415 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-14 02:14:47.680222 | orchestrator | Wednesday 14 May 2025 02:14:47 +0000 (0:00:01.278) 0:06:50.303 ********* 2025-05-14 02:14:48.786150 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:48.786249 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:48.787521 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:48.787998 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:48.788847 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:48.789645 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:48.790175 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:48.791049 | orchestrator | 2025-05-14 02:14:48.791504 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-14 02:14:48.791983 | orchestrator | Wednesday 14 May 2025 02:14:48 +0000 (0:00:01.113) 0:06:51.417 ********* 2025-05-14 02:14:49.864463 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:49.864684 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:49.864706 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:49.864718 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:49.864767 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:49.867982 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:49.868011 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:49.868022 | orchestrator | 2025-05-14 02:14:49.868037 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-14 02:14:49.868050 | orchestrator | Wednesday 14 May 2025 02:14:49 +0000 (0:00:01.077) 0:06:52.494 ********* 2025-05-14 02:14:51.271800 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:51.272414 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:51.273765 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:51.274433 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:51.274849 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:51.276043 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:51.276854 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:51.279088 | orchestrator | 2025-05-14 02:14:51.279115 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-14 02:14:51.282224 | orchestrator | Wednesday 14 May 2025 02:14:51 +0000 (0:00:01.408) 0:06:53.903 ********* 2025-05-14 02:14:52.692457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:14:52.695976 | orchestrator | 2025-05-14 02:14:52.696023 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:14:52.698867 | orchestrator 
| Wednesday 14 May 2025 02:14:52 +0000 (0:00:01.022) 0:06:54.926 ********* 2025-05-14 02:14:52.698905 | orchestrator | 2025-05-14 02:14:52.699853 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:14:52.700254 | orchestrator | Wednesday 14 May 2025 02:14:52 +0000 (0:00:00.065) 0:06:54.991 ********* 2025-05-14 02:14:52.701148 | orchestrator | 2025-05-14 02:14:52.701653 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:14:52.701944 | orchestrator | Wednesday 14 May 2025 02:14:52 +0000 (0:00:00.053) 0:06:55.045 ********* 2025-05-14 02:14:52.702239 | orchestrator | 2025-05-14 02:14:52.702548 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:14:52.703034 | orchestrator | Wednesday 14 May 2025 02:14:52 +0000 (0:00:00.052) 0:06:55.098 ********* 2025-05-14 02:14:52.703366 | orchestrator | 2025-05-14 02:14:52.703545 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:14:52.706479 | orchestrator | Wednesday 14 May 2025 02:14:52 +0000 (0:00:00.064) 0:06:55.162 ********* 2025-05-14 02:14:52.706506 | orchestrator | 2025-05-14 02:14:52.706518 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:14:52.706557 | orchestrator | Wednesday 14 May 2025 02:14:52 +0000 (0:00:00.055) 0:06:55.217 ********* 2025-05-14 02:14:52.706569 | orchestrator | 2025-05-14 02:14:52.706580 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:14:52.706591 | orchestrator | Wednesday 14 May 2025 02:14:52 +0000 (0:00:00.049) 0:06:55.267 ********* 2025-05-14 02:14:52.706601 | orchestrator | 2025-05-14 02:14:52.706612 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-14 02:14:52.706623 | orchestrator | Wednesday 14 May 2025 02:14:52 +0000 (0:00:00.055) 0:06:55.323 ********* 2025-05-14 02:14:53.799103 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:53.799321 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:53.800115 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:53.800282 | orchestrator | 2025-05-14 02:14:53.800817 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-14 02:14:53.802511 | orchestrator | Wednesday 14 May 2025 02:14:53 +0000 (0:00:01.109) 0:06:56.432 ********* 2025-05-14 02:14:56.094696 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:56.094887 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:56.094910 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:56.094921 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:56.094932 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:56.094942 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:56.094953 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:56.094965 | orchestrator | 2025-05-14 02:14:56.094977 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-14 02:14:56.095058 | orchestrator | Wednesday 14 May 2025 02:14:56 +0000 (0:00:02.290) 0:06:58.723 ********* 2025-05-14 02:14:57.219492 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:57.220222 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:57.225041 | orchestrator | changed: [testbed-node-4] 
2025-05-14 02:14:57.225089 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:57.225616 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:57.228285 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:57.228348 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:57.228362 | orchestrator | 2025-05-14 02:14:57.228376 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-14 02:14:57.228525 | orchestrator | Wednesday 14 May 2025 02:14:57 +0000 (0:00:01.126) 0:06:59.849 ********* 2025-05-14 02:14:57.362595 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:59.344498 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:59.344609 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:59.344970 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:59.346068 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:59.346700 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:59.348513 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:59.349813 | orchestrator | 2025-05-14 02:14:59.350507 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-14 02:14:59.350774 | orchestrator | Wednesday 14 May 2025 02:14:59 +0000 (0:00:02.125) 0:07:01.974 ********* 2025-05-14 02:14:59.452180 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:59.452275 | orchestrator | 2025-05-14 02:14:59.453374 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-14 02:14:59.453779 | orchestrator | Wednesday 14 May 2025 02:14:59 +0000 (0:00:00.109) 0:07:02.084 ********* 2025-05-14 02:15:00.527171 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:00.527356 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:00.528101 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:00.528961 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:00.529030 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:00.530151 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:00.530582 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:00.531287 | orchestrator | 2025-05-14 02:15:00.531784 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-14 02:15:00.532149 | orchestrator | Wednesday 14 May 2025 02:15:00 +0000 (0:00:01.076) 0:07:03.160 ********* 2025-05-14 02:15:00.757529 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:00.821676 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:00.879028 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:01.071706 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:01.183152 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:01.183388 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:01.184837 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:01.185318 | orchestrator | 2025-05-14 02:15:01.186013 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-14 02:15:01.186693 | orchestrator | Wednesday 14 May 2025 02:15:01 +0000 (0:00:00.656) 0:07:03.817 ********* 2025-05-14 02:15:01.968638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 
02:15:01.968929 | orchestrator | 2025-05-14 02:15:01.969806 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-14 02:15:01.970525 | orchestrator | Wednesday 14 May 2025 02:15:01 +0000 (0:00:00.784) 0:07:04.601 ********* 2025-05-14 02:15:02.759170 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:02.760935 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:02.761334 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:02.762464 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:02.763364 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:02.764724 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:02.765163 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:02.766131 | orchestrator | 2025-05-14 02:15:02.766520 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-14 02:15:02.767313 | orchestrator | Wednesday 14 May 2025 02:15:02 +0000 (0:00:00.789) 0:07:05.391 ********* 2025-05-14 02:15:05.275300 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-14 02:15:05.278539 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-14 02:15:05.278593 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-14 02:15:05.278933 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-14 02:15:05.279299 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-14 02:15:05.280777 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-14 02:15:05.281821 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-14 02:15:05.282062 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-14 02:15:05.285701 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-14 02:15:05.286574 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-14 02:15:05.287261 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-14 02:15:05.287902 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-14 02:15:05.288799 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-14 02:15:05.289474 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-14 02:15:05.290285 | orchestrator | 2025-05-14 02:15:05.290500 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-14 02:15:05.291521 | orchestrator | Wednesday 14 May 2025 02:15:05 +0000 (0:00:02.515) 0:07:07.906 ********* 2025-05-14 02:15:05.405561 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:05.479058 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:05.541444 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:05.618869 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:05.688264 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:05.798329 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:05.801478 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:05.801524 | orchestrator | 2025-05-14 02:15:05.801537 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-14 02:15:05.801585 | orchestrator | Wednesday 14 May 2025 02:15:05 +0000 (0:00:00.521) 0:07:08.428 ********* 2025-05-14 02:15:06.599287 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:15:06.599397 | orchestrator | 2025-05-14 02:15:06.602486 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-14 02:15:06.602532 | orchestrator | Wednesday 14 May 2025 02:15:06 +0000 (0:00:00.801) 0:07:09.230 ********* 2025-05-14 02:15:07.451233 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:07.451401 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:07.452276 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:07.453391 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:07.457225 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:07.457277 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:07.457291 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:07.457305 | orchestrator | 2025-05-14 02:15:07.457906 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-14 02:15:07.459033 | orchestrator | Wednesday 14 May 2025 02:15:07 +0000 (0:00:00.852) 0:07:10.082 ********* 2025-05-14 02:15:07.859829 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:07.924454 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:08.466602 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:08.467055 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:08.468888 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:08.469499 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:08.470340 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:08.470983 | orchestrator | 2025-05-14 02:15:08.471565 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-14 02:15:08.472254 | orchestrator | Wednesday 14 May 2025 02:15:08 +0000 (0:00:01.016) 0:07:11.098 ********* 2025-05-14 02:15:08.597595 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:08.661863 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:08.729432 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:08.792466 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:08.854358 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:08.948445 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:08.948619 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:08.949336 | orchestrator | 2025-05-14 02:15:08.949718 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-14 02:15:08.950002 | orchestrator | Wednesday 14 May 2025 02:15:08 +0000 (0:00:00.482) 0:07:11.581 ********* 2025-05-14 02:15:10.321653 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:10.322387 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:10.323709 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:10.327322 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:10.327412 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:10.327426 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:10.327492 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:10.328291 | orchestrator | 2025-05-14 02:15:10.328751 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-14 02:15:10.329618 | orchestrator | Wednesday 14 May 2025 02:15:10 +0000 (0:00:01.371) 0:07:12.953 ********* 2025-05-14 
02:15:10.470286 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:10.533204 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:10.598491 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:10.674385 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:10.748856 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:10.839124 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:10.840333 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:10.844487 | orchestrator | 2025-05-14 02:15:10.844527 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-14 02:15:10.844569 | orchestrator | Wednesday 14 May 2025 02:15:10 +0000 (0:00:00.519) 0:07:13.472 ********* 2025-05-14 02:15:12.958655 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:12.964107 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:12.965932 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:12.966987 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:12.968216 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:12.968814 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:12.969405 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:12.970269 | orchestrator | 2025-05-14 02:15:12.971018 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-14 02:15:12.972386 | orchestrator | Wednesday 14 May 2025 02:15:12 +0000 (0:00:02.117) 0:07:15.590 ********* 2025-05-14 02:15:14.242333 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:14.242438 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:14.242509 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:14.243946 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:14.244362 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:14.244992 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:14.245285 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:14.245932 | orchestrator | 2025-05-14 02:15:14.246418 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-14 02:15:14.246872 | orchestrator | Wednesday 14 May 2025 02:15:14 +0000 (0:00:01.281) 0:07:16.871 ********* 2025-05-14 02:15:15.990405 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:15.991965 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:15.992805 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:15.993414 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:15.994327 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:15.994846 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:15.995601 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:15.996102 | orchestrator | 2025-05-14 02:15:15.996934 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-14 02:15:15.997400 | orchestrator | Wednesday 14 May 2025 02:15:15 +0000 (0:00:01.750) 0:07:18.622 ********* 2025-05-14 02:15:17.685949 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:17.686237 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:17.691225 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:17.693071 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:17.693407 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:17.694319 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:17.694824 | orchestrator | changed: [testbed-node-1] 
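The osism.commons.docker_compose tasks above retire the standalone docker-compose binary in favour of the docker-compose-plugin package and hook container units onto an osism.target systemd target. A rough equivalent of the two central steps, assuming only the package and target names that appear in the log (the systemd unit file contents themselves are not visible here):

- name: Install docker-compose-plugin package
  ansible.builtin.apt:
    name: docker-compose-plugin
    state: present

- name: Enable osism.target
  ansible.builtin.systemd:
    name: osism.target
    enabled: true
    daemon_reload: true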
2025-05-14 02:15:17.695697 | orchestrator | 2025-05-14 02:15:17.696929 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 02:15:17.697202 | orchestrator | Wednesday 14 May 2025 02:15:17 +0000 (0:00:01.695) 0:07:20.317 ********* 2025-05-14 02:15:18.323359 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:18.746785 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:18.746891 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:18.747372 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:18.747692 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:18.747853 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:18.748162 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:18.749720 | orchestrator | 2025-05-14 02:15:18.750351 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 02:15:18.751174 | orchestrator | Wednesday 14 May 2025 02:15:18 +0000 (0:00:01.060) 0:07:21.378 ********* 2025-05-14 02:15:18.880951 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:18.944023 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:19.016202 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:19.082197 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:19.144492 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:19.566905 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:19.567065 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:19.567866 | orchestrator | 2025-05-14 02:15:19.568483 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-14 02:15:19.568913 | orchestrator | Wednesday 14 May 2025 02:15:19 +0000 (0:00:00.822) 0:07:22.201 ********* 2025-05-14 02:15:19.712854 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:19.778812 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:19.851055 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:19.907279 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:19.962467 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:20.040564 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:20.040777 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:20.041031 | orchestrator | 2025-05-14 02:15:20.041650 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-14 02:15:20.042345 | orchestrator | Wednesday 14 May 2025 02:15:20 +0000 (0:00:00.473) 0:07:22.674 ********* 2025-05-14 02:15:20.151889 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:20.208198 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:20.269306 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:20.320617 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:20.376618 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:20.458636 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:20.459768 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:20.460823 | orchestrator | 2025-05-14 02:15:20.461647 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-14 02:15:20.462836 | orchestrator | Wednesday 14 May 2025 02:15:20 +0000 (0:00:00.420) 0:07:23.094 ********* 2025-05-14 02:15:20.571960 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:20.632787 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:20.820509 | orchestrator | ok: [testbed-node-4] 2025-05-14 
02:15:20.876136 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:20.931268 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:21.024862 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:21.025291 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:21.026331 | orchestrator | 2025-05-14 02:15:21.029419 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-14 02:15:21.029460 | orchestrator | Wednesday 14 May 2025 02:15:21 +0000 (0:00:00.564) 0:07:23.659 ********* 2025-05-14 02:15:21.132996 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:21.190405 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:21.243392 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:21.298767 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:21.357109 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:21.445233 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:21.445411 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:21.448391 | orchestrator | 2025-05-14 02:15:21.448421 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-14 02:15:21.448435 | orchestrator | Wednesday 14 May 2025 02:15:21 +0000 (0:00:00.419) 0:07:24.079 ********* 2025-05-14 02:15:27.162613 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:27.162846 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:27.164709 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:27.165338 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:27.166482 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:27.167551 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:27.169213 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:27.170643 | orchestrator | 2025-05-14 02:15:27.172375 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-14 02:15:27.173809 | orchestrator | Wednesday 14 May 2025 02:15:27 +0000 (0:00:05.715) 0:07:29.794 ********* 2025-05-14 02:15:27.300470 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:27.366872 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:27.437279 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:27.502269 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:27.565921 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:27.686259 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:27.687482 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:27.688335 | orchestrator | 2025-05-14 02:15:27.690864 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-14 02:15:27.692273 | orchestrator | Wednesday 14 May 2025 02:15:27 +0000 (0:00:00.526) 0:07:30.320 ********* 2025-05-14 02:15:28.734226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:15:28.734589 | orchestrator | 2025-05-14 02:15:28.736290 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-14 02:15:28.737561 | orchestrator | Wednesday 14 May 2025 02:15:28 +0000 (0:00:01.045) 0:07:31.365 ********* 2025-05-14 02:15:30.500445 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:30.501329 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:30.503359 | orchestrator | ok: 
[testbed-node-5] 2025-05-14 02:15:30.509612 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:30.510534 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:30.510601 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:30.511325 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:30.511778 | orchestrator | 2025-05-14 02:15:30.512154 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-14 02:15:30.512593 | orchestrator | Wednesday 14 May 2025 02:15:30 +0000 (0:00:01.768) 0:07:33.133 ********* 2025-05-14 02:15:31.618325 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:31.619211 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:31.619658 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:31.620087 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:31.620964 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:31.621547 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:31.622077 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:31.622662 | orchestrator | 2025-05-14 02:15:31.622983 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-14 02:15:31.623352 | orchestrator | Wednesday 14 May 2025 02:15:31 +0000 (0:00:01.116) 0:07:34.250 ********* 2025-05-14 02:15:32.450387 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:32.451110 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:32.453161 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:32.454346 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:32.456457 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:32.457932 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:32.458251 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:32.458637 | orchestrator | 2025-05-14 02:15:32.459650 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-14 02:15:32.460274 | orchestrator | Wednesday 14 May 2025 02:15:32 +0000 (0:00:00.832) 0:07:35.083 ********* 2025-05-14 02:15:34.336241 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:15:34.336681 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:15:34.337805 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:15:34.339127 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:15:34.339886 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:15:34.342174 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:15:34.342696 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:15:34.344166 | orchestrator | 2025-05-14 02:15:34.344876 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-14 02:15:34.346573 | orchestrator | 
Wednesday 14 May 2025 02:15:34 +0000 (0:00:01.884) 0:07:36.967 ********* 2025-05-14 02:15:35.161918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:15:35.162784 | orchestrator | 2025-05-14 02:15:35.163237 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-14 02:15:35.164294 | orchestrator | Wednesday 14 May 2025 02:15:35 +0000 (0:00:00.826) 0:07:37.794 ********* 2025-05-14 02:15:43.756516 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:43.757968 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:43.758837 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:43.761902 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:43.762602 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:43.763244 | orchestrator | changed: [testbed-manager] 2025-05-14 02:15:43.764527 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:43.765807 | orchestrator | 2025-05-14 02:15:43.766980 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-14 02:15:43.768118 | orchestrator | Wednesday 14 May 2025 02:15:43 +0000 (0:00:08.592) 0:07:46.386 ********* 2025-05-14 02:15:45.731331 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:45.736238 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:45.736885 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:45.737190 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:45.738091 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:45.738935 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:45.740809 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:45.742780 | orchestrator | 2025-05-14 02:15:45.743907 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-14 02:15:45.744877 | orchestrator | Wednesday 14 May 2025 02:15:45 +0000 (0:00:01.975) 0:07:48.362 ********* 2025-05-14 02:15:46.996194 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:46.996758 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:46.999631 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:46.999659 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:47.000433 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:47.001092 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:47.001917 | orchestrator | 2025-05-14 02:15:47.002889 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-14 02:15:47.003599 | orchestrator | Wednesday 14 May 2025 02:15:46 +0000 (0:00:01.267) 0:07:49.630 ********* 2025-05-14 02:15:48.387594 | orchestrator | changed: [testbed-manager] 2025-05-14 02:15:48.387986 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:48.389015 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:48.390786 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:48.391134 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:48.392801 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:48.393546 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:48.394678 | orchestrator | 2025-05-14 02:15:48.395051 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-14 02:15:48.396000 | orchestrator | 2025-05-14 
02:15:48.396658 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-14 02:15:48.397149 | orchestrator | Wednesday 14 May 2025 02:15:48 +0000 (0:00:01.389) 0:07:51.019 ********* 2025-05-14 02:15:48.504885 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:48.583008 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:48.641523 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:48.701240 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:48.782566 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:48.904411 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:48.905013 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:48.906103 | orchestrator | 2025-05-14 02:15:48.906518 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-14 02:15:48.910923 | orchestrator | 2025-05-14 02:15:48.910991 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-14 02:15:48.912173 | orchestrator | Wednesday 14 May 2025 02:15:48 +0000 (0:00:00.517) 0:07:51.537 ********* 2025-05-14 02:15:50.230206 | orchestrator | changed: [testbed-manager] 2025-05-14 02:15:50.230342 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:50.230920 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:50.231363 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:50.232345 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:50.236129 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:50.236153 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:50.236165 | orchestrator | 2025-05-14 02:15:50.236179 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-14 02:15:50.236419 | orchestrator | Wednesday 14 May 2025 02:15:50 +0000 (0:00:01.323) 0:07:52.861 ********* 2025-05-14 02:15:51.634592 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:51.634878 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:51.636278 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:51.639489 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:51.639546 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:51.639559 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:51.641757 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:51.642609 | orchestrator | 2025-05-14 02:15:51.643491 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-14 02:15:51.644420 | orchestrator | Wednesday 14 May 2025 02:15:51 +0000 (0:00:01.407) 0:07:54.268 ********* 2025-05-14 02:15:51.772395 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:51.843991 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:51.913372 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:52.128972 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:52.191074 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:52.608692 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:52.610217 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:52.614388 | orchestrator | 2025-05-14 02:15:52.614433 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-14 02:15:52.614448 | orchestrator | Wednesday 14 May 2025 02:15:52 +0000 (0:00:00.972) 0:07:55.241 ********* 2025-05-14 02:15:53.815303 | orchestrator | changed: 
[testbed-manager] 2025-05-14 02:15:53.816220 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:53.819137 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:53.819176 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:53.819187 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:53.820771 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:53.820867 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:53.821686 | orchestrator | 2025-05-14 02:15:53.822587 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-14 02:15:53.823397 | orchestrator | 2025-05-14 02:15:53.824230 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-14 02:15:53.825113 | orchestrator | Wednesday 14 May 2025 02:15:53 +0000 (0:00:01.207) 0:07:56.448 ********* 2025-05-14 02:15:54.590539 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:15:54.591393 | orchestrator | 2025-05-14 02:15:54.595065 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-14 02:15:54.595110 | orchestrator | Wednesday 14 May 2025 02:15:54 +0000 (0:00:00.775) 0:07:57.224 ********* 2025-05-14 02:15:55.067756 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:55.155253 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:55.634468 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:55.636860 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:55.638872 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:55.638936 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:55.639757 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:55.641025 | orchestrator | 2025-05-14 02:15:55.642656 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-14 02:15:55.643952 | orchestrator | Wednesday 14 May 2025 02:15:55 +0000 (0:00:01.041) 0:07:58.265 ********* 2025-05-14 02:15:56.802891 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:56.803468 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:56.804449 | orchestrator | changed: [testbed-manager] 2025-05-14 02:15:56.808222 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:56.808251 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:56.808263 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:56.808274 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:56.808287 | orchestrator | 2025-05-14 02:15:56.808389 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-14 02:15:56.809870 | orchestrator | Wednesday 14 May 2025 02:15:56 +0000 (0:00:01.171) 0:07:59.436 ********* 2025-05-14 02:15:57.826382 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:15:57.827262 | orchestrator | 2025-05-14 02:15:57.830960 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-14 02:15:57.831678 | orchestrator | Wednesday 14 May 2025 02:15:57 +0000 (0:00:01.020) 0:08:00.457 ********* 2025-05-14 02:15:58.655114 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:58.656929 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:58.657374 | orchestrator | ok: 
[testbed-node-4] 2025-05-14 02:15:58.657664 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:58.658700 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:58.659611 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:58.660276 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:58.661937 | orchestrator | 2025-05-14 02:15:58.662081 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-14 02:15:58.663174 | orchestrator | Wednesday 14 May 2025 02:15:58 +0000 (0:00:00.828) 0:08:01.285 ********* 2025-05-14 02:15:59.743581 | orchestrator | changed: [testbed-manager] 2025-05-14 02:15:59.744505 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:59.746936 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:59.747878 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:59.748832 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:59.750609 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:59.751771 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:59.753141 | orchestrator | 2025-05-14 02:15:59.754466 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:15:59.755292 | orchestrator | 2025-05-14 02:15:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:15:59.755318 | orchestrator | 2025-05-14 02:15:59 | INFO  | Please wait and do not abort execution. 2025-05-14 02:15:59.756296 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-14 02:15:59.757828 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:15:59.758583 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:15:59.759978 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:15:59.760261 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-14 02:15:59.761800 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:15:59.762424 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:15:59.763030 | orchestrator | 2025-05-14 02:15:59.763790 | orchestrator | Wednesday 14 May 2025 02:15:59 +0000 (0:00:01.093) 0:08:02.378 ********* 2025-05-14 02:15:59.764646 | orchestrator | =============================================================================== 2025-05-14 02:15:59.765225 | orchestrator | osism.commons.packages : Install required packages --------------------- 84.55s 2025-05-14 02:15:59.766142 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.83s 2025-05-14 02:15:59.766828 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.86s 2025-05-14 02:15:59.767762 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.44s 2025-05-14 02:15:59.768107 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.74s 2025-05-14 02:15:59.768997 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.56s 2025-05-14 02:15:59.769312 | orchestrator | osism.commons.packages : Remove 
dependencies that are no longer required -- 12.10s 2025-05-14 02:15:59.770135 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.14s 2025-05-14 02:15:59.770836 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.60s 2025-05-14 02:15:59.771202 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.73s 2025-05-14 02:15:59.771965 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.59s 2025-05-14 02:15:59.772922 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.14s 2025-05-14 02:15:59.773126 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.01s 2025-05-14 02:15:59.773848 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.97s 2025-05-14 02:15:59.774326 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.81s 2025-05-14 02:15:59.774819 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.59s 2025-05-14 02:15:59.775442 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.30s 2025-05-14 02:15:59.776078 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.93s 2025-05-14 02:15:59.776691 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.89s 2025-05-14 02:15:59.777064 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.87s 2025-05-14 02:16:00.500205 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-14 02:16:00.500356 | orchestrator | + osism apply network 2025-05-14 02:16:02.509087 | orchestrator | 2025-05-14 02:16:02 | INFO  | Task 552f9357-9f0d-48a0-b624-080f59fc7c7c (network) was prepared for execution. 2025-05-14 02:16:02.509193 | orchestrator | 2025-05-14 02:16:02 | INFO  | It takes a moment until task 552f9357-9f0d-48a0-b624-080f59fc7c7c (network) has been started and output is visible here. 
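Editor's note: the `osism apply network` step that starts here applies the osism.commons.network role. As the tasks below show, it renders a netplan file per host, keeps `/etc/netplan/01-osism.yaml`, removes the cloud-init generated `/etc/netplan/50-cloud-init.yaml`, and installs the testbed's `iptables.sh`/`vxlan.sh` scripts as networkd-dispatcher `routable.d/` hooks. Purely as an illustration of the file format involved — the actual template and host variables live in the testbed configuration repository and are not visible in this log — a rendered netplan file of this kind might look roughly like the following (interface name and addresses are invented):

# /etc/netplan/01-osism.yaml -- illustrative sketch only; the real file is
# rendered by the osism.commons.network role from the testbed host variables.
network:
  version: 2
  ethernets:
    ens3:                      # hypothetical management interface name
      dhcp4: false
      addresses:
        - 192.168.16.10/20     # hypothetical address, not taken from this log
      nameservers:
        addresses:
          - 8.8.8.8            # hypothetical resolver
      routes:
        - to: default
          via: 192.168.16.1    # hypothetical gateway

Note that the "Netplan configuration changed" handler stays skipped in this play; the configuration is only activated later, in the `workarounds` play's "Apply netplan configuration" tasks.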
2025-05-14 02:16:05.962862 | orchestrator | 2025-05-14 02:16:05.966144 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-14 02:16:05.966191 | orchestrator | 2025-05-14 02:16:05.966205 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-14 02:16:05.967902 | orchestrator | Wednesday 14 May 2025 02:16:05 +0000 (0:00:00.205) 0:00:00.205 ********* 2025-05-14 02:16:06.116837 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:06.196648 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:06.284540 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:06.364485 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:06.440642 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:06.686975 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:06.687138 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:06.688128 | orchestrator | 2025-05-14 02:16:06.688851 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-14 02:16:06.689447 | orchestrator | Wednesday 14 May 2025 02:16:06 +0000 (0:00:00.723) 0:00:00.929 ********* 2025-05-14 02:16:07.867510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:16:07.868278 | orchestrator | 2025-05-14 02:16:07.871141 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-14 02:16:07.871234 | orchestrator | Wednesday 14 May 2025 02:16:07 +0000 (0:00:01.178) 0:00:02.107 ********* 2025-05-14 02:16:09.723216 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:09.723891 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:09.725255 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:09.726297 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:09.728386 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:09.728745 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:09.729528 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:09.730433 | orchestrator | 2025-05-14 02:16:09.731262 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-14 02:16:09.732170 | orchestrator | Wednesday 14 May 2025 02:16:09 +0000 (0:00:01.855) 0:00:03.963 ********* 2025-05-14 02:16:11.464193 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:11.465561 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:11.466943 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:11.467943 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:11.468945 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:11.469858 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:11.471380 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:11.472293 | orchestrator | 2025-05-14 02:16:11.473237 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-14 02:16:11.474204 | orchestrator | Wednesday 14 May 2025 02:16:11 +0000 (0:00:01.741) 0:00:05.705 ********* 2025-05-14 02:16:11.965565 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-14 02:16:11.965778 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-14 02:16:12.590883 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-14 02:16:12.592341 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-14 02:16:12.592369 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-14 02:16:12.597343 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-14 02:16:12.597388 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-14 02:16:12.597403 | orchestrator | 2025-05-14 02:16:12.597900 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-14 02:16:12.598382 | orchestrator | Wednesday 14 May 2025 02:16:12 +0000 (0:00:01.127) 0:00:06.832 ********* 2025-05-14 02:16:14.305548 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:16:14.308052 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 02:16:14.310360 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:16:14.312061 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 02:16:14.312690 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 02:16:14.313428 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 02:16:14.315051 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 02:16:14.316328 | orchestrator | 2025-05-14 02:16:14.317352 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-14 02:16:14.318543 | orchestrator | Wednesday 14 May 2025 02:16:14 +0000 (0:00:01.715) 0:00:08.548 ********* 2025-05-14 02:16:15.989559 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:15.989664 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:15.991167 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:15.991603 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:15.992488 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:15.993191 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:15.993433 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:15.994225 | orchestrator | 2025-05-14 02:16:15.994512 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-14 02:16:15.995371 | orchestrator | Wednesday 14 May 2025 02:16:15 +0000 (0:00:01.678) 0:00:10.227 ********* 2025-05-14 02:16:16.576020 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:16:17.003034 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:16:17.006149 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 02:16:17.006282 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 02:16:17.007198 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 02:16:17.008855 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 02:16:17.009877 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 02:16:17.010502 | orchestrator | 2025-05-14 02:16:17.011341 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-14 02:16:17.011897 | orchestrator | Wednesday 14 May 2025 02:16:16 +0000 (0:00:01.021) 0:00:11.248 ********* 2025-05-14 02:16:17.448679 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:17.621913 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:18.135466 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:18.136426 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:18.137591 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:18.141799 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:18.142502 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:18.143089 | orchestrator | 2025-05-14 
02:16:18.144393 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-14 02:16:18.145539 | orchestrator | Wednesday 14 May 2025 02:16:18 +0000 (0:00:01.127) 0:00:12.376 ********* 2025-05-14 02:16:18.295351 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:18.376950 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:18.456997 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:18.542869 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:18.625051 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:18.933620 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:18.934620 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:18.935593 | orchestrator | 2025-05-14 02:16:18.938330 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-14 02:16:18.938423 | orchestrator | Wednesday 14 May 2025 02:16:18 +0000 (0:00:00.798) 0:00:13.175 ********* 2025-05-14 02:16:20.922220 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:20.922786 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:20.922819 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:20.924883 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:20.925889 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:20.927001 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:20.928062 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:20.928475 | orchestrator | 2025-05-14 02:16:20.929114 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-14 02:16:20.929933 | orchestrator | Wednesday 14 May 2025 02:16:20 +0000 (0:00:01.988) 0:00:15.164 ********* 2025-05-14 02:16:22.582772 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-14 02:16:22.586481 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:16:22.586539 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:16:22.586551 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:16:22.586562 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:16:22.586572 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:16:22.586609 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:16:22.587221 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:16:22.587932 | orchestrator | 2025-05-14 02:16:22.588554 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-14 02:16:22.588980 | orchestrator | Wednesday 14 May 2025 02:16:22 +0000 (0:00:01.660) 0:00:16.824 ********* 2025-05-14 02:16:23.938531 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:23.939365 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:23.939409 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:23.940175 | 
orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:23.940933 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:23.942633 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:23.944382 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:23.944952 | orchestrator | 2025-05-14 02:16:23.946137 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-14 02:16:23.948818 | orchestrator | Wednesday 14 May 2025 02:16:23 +0000 (0:00:01.353) 0:00:18.178 ********* 2025-05-14 02:16:25.366169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:16:25.368210 | orchestrator | 2025-05-14 02:16:25.369006 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-14 02:16:25.371416 | orchestrator | Wednesday 14 May 2025 02:16:25 +0000 (0:00:01.428) 0:00:19.606 ********* 2025-05-14 02:16:26.332449 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:26.332923 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:26.333574 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:26.334358 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:26.334676 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:26.334936 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:26.335308 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:26.339031 | orchestrator | 2025-05-14 02:16:26.339124 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-14 02:16:26.339553 | orchestrator | Wednesday 14 May 2025 02:16:26 +0000 (0:00:00.971) 0:00:20.578 ********* 2025-05-14 02:16:26.493016 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:26.578600 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:26.819898 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:26.915634 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:26.999407 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:27.155870 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:27.156095 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:27.156128 | orchestrator | 2025-05-14 02:16:27.157052 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-14 02:16:27.157828 | orchestrator | Wednesday 14 May 2025 02:16:27 +0000 (0:00:00.818) 0:00:21.397 ********* 2025-05-14 02:16:27.583350 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:16:27.584070 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:16:27.670474 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:16:27.670647 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:16:28.132486 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:16:28.132592 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:16:28.133011 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:16:28.133258 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:16:28.134318 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:16:28.137973 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:16:28.138555 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:16:28.139128 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:16:28.139774 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:16:28.140252 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:16:28.140794 | orchestrator | 2025-05-14 02:16:28.141320 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-14 02:16:28.142207 | orchestrator | Wednesday 14 May 2025 02:16:28 +0000 (0:00:00.979) 0:00:22.377 ********* 2025-05-14 02:16:28.461084 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:28.549036 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:28.642146 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:28.724243 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:28.804639 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:29.949215 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:29.949448 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:29.954011 | orchestrator | 2025-05-14 02:16:29.954169 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-14 02:16:29.954894 | orchestrator | Wednesday 14 May 2025 02:16:29 +0000 (0:00:01.812) 0:00:24.189 ********* 2025-05-14 02:16:30.104575 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:30.185037 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:30.440392 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:30.525337 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:30.604899 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:30.647111 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:30.647208 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:30.647595 | orchestrator | 2025-05-14 02:16:30.648805 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:16:30.648879 | orchestrator | 2025-05-14 02:16:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:16:30.648903 | orchestrator | 2025-05-14 02:16:30 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:16:30.649410 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:16:30.649902 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:16:30.650781 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:16:30.651468 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:16:30.651816 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:16:30.652514 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:16:30.653215 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:16:30.653675 | orchestrator | 2025-05-14 02:16:30.654881 | orchestrator | Wednesday 14 May 2025 02:16:30 +0000 (0:00:00.702) 0:00:24.891 ********* 2025-05-14 02:16:30.655313 | orchestrator | =============================================================================== 2025-05-14 02:16:30.655533 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.99s 2025-05-14 02:16:30.656242 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.86s 2025-05-14 02:16:30.656466 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.81s 2025-05-14 02:16:30.656873 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s 2025-05-14 02:16:30.657269 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.72s 2025-05-14 02:16:30.657632 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.68s 2025-05-14 02:16:30.658107 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.66s 2025-05-14 02:16:30.658450 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.43s 2025-05-14 02:16:30.658939 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.35s 2025-05-14 02:16:30.659337 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s 2025-05-14 02:16:30.659800 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s 2025-05-14 02:16:30.660018 | orchestrator | osism.commons.network : Create required directories --------------------- 1.13s 2025-05-14 02:16:30.660297 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.02s 2025-05-14 02:16:30.660857 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 0.98s 2025-05-14 02:16:30.661139 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-05-14 02:16:30.661565 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.82s 2025-05-14 02:16:30.661931 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.80s 2025-05-14 02:16:30.662266 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.72s 2025-05-14 02:16:30.662673 | orchestrator | osism.commons.network : Netplan configuration changed 
------------------- 0.70s 2025-05-14 02:16:31.202179 | orchestrator | + osism apply wireguard 2025-05-14 02:16:32.626640 | orchestrator | 2025-05-14 02:16:32 | INFO  | Task 8a2bccf1-4e55-4c04-b5cc-6d40b1c641b2 (wireguard) was prepared for execution. 2025-05-14 02:16:32.626758 | orchestrator | 2025-05-14 02:16:32 | INFO  | It takes a moment until task 8a2bccf1-4e55-4c04-b5cc-6d40b1c641b2 (wireguard) has been started and output is visible here. 2025-05-14 02:16:35.785139 | orchestrator | 2025-05-14 02:16:35.786228 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-14 02:16:35.786260 | orchestrator | 2025-05-14 02:16:35.786964 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-14 02:16:35.789861 | orchestrator | Wednesday 14 May 2025 02:16:35 +0000 (0:00:00.163) 0:00:00.163 ********* 2025-05-14 02:16:37.329968 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:37.331248 | orchestrator | 2025-05-14 02:16:37.331281 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-14 02:16:37.331604 | orchestrator | Wednesday 14 May 2025 02:16:37 +0000 (0:00:01.546) 0:00:01.710 ********* 2025-05-14 02:16:43.408839 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:43.409435 | orchestrator | 2025-05-14 02:16:43.409758 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-14 02:16:43.411797 | orchestrator | Wednesday 14 May 2025 02:16:43 +0000 (0:00:06.079) 0:00:07.789 ********* 2025-05-14 02:16:43.928272 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:43.928378 | orchestrator | 2025-05-14 02:16:43.928424 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-14 02:16:43.929367 | orchestrator | Wednesday 14 May 2025 02:16:43 +0000 (0:00:00.519) 0:00:08.308 ********* 2025-05-14 02:16:44.364796 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:44.365449 | orchestrator | 2025-05-14 02:16:44.366285 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-14 02:16:44.367389 | orchestrator | Wednesday 14 May 2025 02:16:44 +0000 (0:00:00.438) 0:00:08.746 ********* 2025-05-14 02:16:44.887486 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:44.888584 | orchestrator | 2025-05-14 02:16:44.889534 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-14 02:16:44.890224 | orchestrator | Wednesday 14 May 2025 02:16:44 +0000 (0:00:00.521) 0:00:09.267 ********* 2025-05-14 02:16:45.490425 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:45.490928 | orchestrator | 2025-05-14 02:16:45.493203 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-14 02:16:45.493298 | orchestrator | Wednesday 14 May 2025 02:16:45 +0000 (0:00:00.602) 0:00:09.870 ********* 2025-05-14 02:16:45.941638 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:45.942561 | orchestrator | 2025-05-14 02:16:45.944026 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-14 02:16:45.944064 | orchestrator | Wednesday 14 May 2025 02:16:45 +0000 (0:00:00.451) 0:00:10.322 ********* 2025-05-14 02:16:47.162413 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:47.162636 | orchestrator | 2025-05-14 02:16:47.163333 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-05-14 02:16:47.163867 | orchestrator | Wednesday 14 May 2025 02:16:47 +0000 (0:00:01.220) 0:00:11.542 ********* 2025-05-14 02:16:48.029638 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:16:48.030127 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:48.030626 | orchestrator | 2025-05-14 02:16:48.031018 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-14 02:16:48.031816 | orchestrator | Wednesday 14 May 2025 02:16:48 +0000 (0:00:00.866) 0:00:12.409 ********* 2025-05-14 02:16:49.746548 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:49.747841 | orchestrator | 2025-05-14 02:16:49.748562 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-14 02:16:49.750104 | orchestrator | Wednesday 14 May 2025 02:16:49 +0000 (0:00:01.716) 0:00:14.126 ********* 2025-05-14 02:16:50.650345 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:50.650447 | orchestrator | 2025-05-14 02:16:50.651439 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:16:50.652055 | orchestrator | 2025-05-14 02:16:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:16:50.652081 | orchestrator | 2025-05-14 02:16:50 | INFO  | Please wait and do not abort execution. 2025-05-14 02:16:50.652238 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:16:50.652596 | orchestrator | 2025-05-14 02:16:50.652905 | orchestrator | Wednesday 14 May 2025 02:16:50 +0000 (0:00:00.905) 0:00:15.031 ********* 2025-05-14 02:16:50.653263 | orchestrator | =============================================================================== 2025-05-14 02:16:50.653555 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.08s 2025-05-14 02:16:50.653949 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.72s 2025-05-14 02:16:50.654240 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s 2025-05-14 02:16:50.654978 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s 2025-05-14 02:16:50.655115 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s 2025-05-14 02:16:50.655135 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.87s 2025-05-14 02:16:50.655709 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.60s 2025-05-14 02:16:50.655779 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-05-14 02:16:50.655946 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.52s 2025-05-14 02:16:50.656306 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2025-05-14 02:16:50.656807 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2025-05-14 02:16:51.223763 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-14 02:16:51.265271 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-14 02:16:51.265336 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-05-14 02:16:51.338327 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 204 0 --:--:-- --:--:-- --:--:-- 205 2025-05-14 02:16:51.351848 | orchestrator | + osism apply --environment custom workarounds 2025-05-14 02:16:52.747554 | orchestrator | 2025-05-14 02:16:52 | INFO  | Trying to run play workarounds in environment custom 2025-05-14 02:16:52.794319 | orchestrator | 2025-05-14 02:16:52 | INFO  | Task 2f3cc31f-9b36-4fdf-8e16-54eb5662a7e4 (workarounds) was prepared for execution. 2025-05-14 02:16:52.794414 | orchestrator | 2025-05-14 02:16:52 | INFO  | It takes a moment until task 2f3cc31f-9b36-4fdf-8e16-54eb5662a7e4 (workarounds) has been started and output is visible here. 2025-05-14 02:16:55.949678 | orchestrator | 2025-05-14 02:16:55.950695 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:16:55.954270 | orchestrator | 2025-05-14 02:16:55.955074 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-14 02:16:55.956787 | orchestrator | Wednesday 14 May 2025 02:16:55 +0000 (0:00:00.142) 0:00:00.142 ********* 2025-05-14 02:16:56.115025 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-14 02:16:56.196539 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-14 02:16:56.286346 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-14 02:16:56.364406 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-14 02:16:56.449897 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-14 02:16:56.703513 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-14 02:16:56.704153 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-14 02:16:56.705488 | orchestrator | 2025-05-14 02:16:56.706416 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-14 02:16:56.707631 | orchestrator | 2025-05-14 02:16:56.708354 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-14 02:16:56.708812 | orchestrator | Wednesday 14 May 2025 02:16:56 +0000 (0:00:00.755) 0:00:00.898 ********* 2025-05-14 02:16:59.417880 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:59.418226 | orchestrator | 2025-05-14 02:16:59.423122 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-14 02:16:59.424312 | orchestrator | 2025-05-14 02:16:59.426177 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-14 02:16:59.427468 | orchestrator | Wednesday 14 May 2025 02:16:59 +0000 (0:00:02.711) 0:00:03.609 ********* 2025-05-14 02:17:01.270399 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:01.272922 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:01.272969 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:01.273270 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:01.277958 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:01.277984 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:01.278985 | orchestrator | 2025-05-14 02:17:01.279010 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-14 02:17:01.279024 | orchestrator | 2025-05-14 
02:17:01.279756 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-14 02:17:01.280378 | orchestrator | Wednesday 14 May 2025 02:17:01 +0000 (0:00:01.853) 0:00:05.463 ********* 2025-05-14 02:17:02.612077 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:17:02.612182 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:17:02.612588 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:17:02.614615 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:17:02.615895 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:17:02.616321 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:17:02.617503 | orchestrator | 2025-05-14 02:17:02.618318 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-14 02:17:02.618902 | orchestrator | Wednesday 14 May 2025 02:17:02 +0000 (0:00:01.340) 0:00:06.803 ********* 2025-05-14 02:17:06.320982 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:06.321674 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:06.322349 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:06.323341 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:06.325572 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:06.326526 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:06.327537 | orchestrator | 2025-05-14 02:17:06.328028 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-14 02:17:06.328678 | orchestrator | Wednesday 14 May 2025 02:17:06 +0000 (0:00:03.710) 0:00:10.514 ********* 2025-05-14 02:17:06.473539 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:17:06.547489 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:17:06.624779 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:17:06.877384 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:17:07.028283 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:17:07.028942 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:17:07.029033 | orchestrator | 2025-05-14 02:17:07.032973 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-14 02:17:07.033005 | orchestrator | 2025-05-14 02:17:07.033038 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-14 02:17:07.033050 | orchestrator | Wednesday 14 May 2025 02:17:07 +0000 (0:00:00.706) 0:00:11.220 ********* 2025-05-14 02:17:08.761744 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:08.761883 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:08.761966 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:08.762941 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:08.763373 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:08.764992 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:08.765410 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:08.767186 | orchestrator | 2025-05-14 02:17:08.767566 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-14 02:17:08.768078 | orchestrator | Wednesday 14 May 2025 02:17:08 +0000 (0:00:01.734) 0:00:12.955 ********* 2025-05-14 02:17:10.440521 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:10.440631 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:10.440705 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:10.443304 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:10.444318 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:10.445129 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:10.448194 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:10.448467 | orchestrator | 2025-05-14 02:17:10.449574 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-14 02:17:10.450870 | orchestrator | Wednesday 14 May 2025 02:17:10 +0000 (0:00:01.672) 0:00:14.628 ********* 2025-05-14 02:17:11.948097 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:11.952134 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:11.952167 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:11.952177 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:11.953607 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:11.954912 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:11.955867 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:11.956234 | orchestrator | 2025-05-14 02:17:11.956641 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-14 02:17:11.956909 | orchestrator | Wednesday 14 May 2025 02:17:11 +0000 (0:00:01.513) 0:00:16.141 ********* 2025-05-14 02:17:13.761263 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:13.761443 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:13.765075 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:13.765100 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:13.765117 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:13.765360 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:13.766736 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:13.767522 | orchestrator | 2025-05-14 02:17:13.768669 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-14 02:17:13.769273 | orchestrator | Wednesday 14 May 2025 02:17:13 +0000 (0:00:01.813) 0:00:17.955 ********* 2025-05-14 02:17:13.918909 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:17:13.996360 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:17:14.080291 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:17:14.148410 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:17:14.398316 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:17:14.541851 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:17:14.544156 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:17:14.544226 | orchestrator | 2025-05-14 02:17:14.544291 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-14 02:17:14.545880 | orchestrator | 2025-05-14 02:17:14.546695 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-14 02:17:14.547979 | orchestrator | Wednesday 14 May 2025 02:17:14 +0000 (0:00:00.778) 0:00:18.733 ********* 2025-05-14 02:17:16.935167 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:16.935383 
| orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:16.936207 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:16.937324 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:16.938477 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:16.939458 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:16.940094 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:16.941262 | orchestrator | 2025-05-14 02:17:16.942222 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:17:16.942487 | orchestrator | 2025-05-14 02:17:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:17:16.942770 | orchestrator | 2025-05-14 02:17:16 | INFO  | Please wait and do not abort execution. 2025-05-14 02:17:16.943688 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:17:16.944656 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:16.945195 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:16.945921 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:16.946280 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:16.947110 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:16.947693 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:16.948012 | orchestrator | 2025-05-14 02:17:16.948588 | orchestrator | Wednesday 14 May 2025 02:17:16 +0000 (0:00:02.395) 0:00:21.129 ********* 2025-05-14 02:17:16.948901 | orchestrator | =============================================================================== 2025-05-14 02:17:16.949413 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s 2025-05-14 02:17:16.949680 | orchestrator | Apply netplan configuration --------------------------------------------- 2.71s 2025-05-14 02:17:16.950170 | orchestrator | Install python3-docker -------------------------------------------------- 2.40s 2025-05-14 02:17:16.950623 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s 2025-05-14 02:17:16.950954 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s 2025-05-14 02:17:16.951475 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.73s 2025-05-14 02:17:16.951898 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.67s 2025-05-14 02:17:16.952510 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.51s 2025-05-14 02:17:16.952848 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.34s 2025-05-14 02:17:16.953060 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.78s 2025-05-14 02:17:16.953546 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s 2025-05-14 02:17:16.953854 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s 2025-05-14 02:17:17.529308 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-14 02:17:18.964208 | orchestrator | 2025-05-14 02:17:18 | INFO  | Task cdf65d0b-344c-405f-b80a-6358c44d08e9 (reboot) was prepared for execution. 2025-05-14 02:17:18.964308 | orchestrator | 2025-05-14 02:17:18 | INFO  | It takes a moment until task cdf65d0b-344c-405f-b80a-6358c44d08e9 (reboot) has been started and output is visible here. 2025-05-14 02:17:22.190638 | orchestrator | 2025-05-14 02:17:22.191575 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:17:22.191618 | orchestrator | 2025-05-14 02:17:22.191646 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:17:22.191964 | orchestrator | Wednesday 14 May 2025 02:17:22 +0000 (0:00:00.181) 0:00:00.181 ********* 2025-05-14 02:17:22.297320 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:17:22.297900 | orchestrator | 2025-05-14 02:17:22.301035 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:17:22.301835 | orchestrator | Wednesday 14 May 2025 02:17:22 +0000 (0:00:00.112) 0:00:00.293 ********* 2025-05-14 02:17:23.229600 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:23.230667 | orchestrator | 2025-05-14 02:17:23.231878 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:17:23.232896 | orchestrator | Wednesday 14 May 2025 02:17:23 +0000 (0:00:00.932) 0:00:01.226 ********* 2025-05-14 02:17:23.330109 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:17:23.330262 | orchestrator | 2025-05-14 02:17:23.330886 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:17:23.331150 | orchestrator | 2025-05-14 02:17:23.332666 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:17:23.333269 | orchestrator | Wednesday 14 May 2025 02:17:23 +0000 (0:00:00.096) 0:00:01.323 ********* 2025-05-14 02:17:23.428053 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:17:23.428231 | orchestrator | 2025-05-14 02:17:23.428654 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:17:23.429485 | orchestrator | Wednesday 14 May 2025 02:17:23 +0000 (0:00:00.101) 0:00:01.424 ********* 2025-05-14 02:17:24.102524 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:24.103673 | orchestrator | 2025-05-14 02:17:24.104004 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:17:24.104523 | orchestrator | Wednesday 14 May 2025 02:17:24 +0000 (0:00:00.674) 0:00:02.099 ********* 2025-05-14 02:17:24.209036 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:17:24.210916 | orchestrator | 2025-05-14 02:17:24.212004 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:17:24.212988 | orchestrator | 2025-05-14 02:17:24.213287 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:17:24.213979 | orchestrator | Wednesday 14 May 2025 02:17:24 +0000 (0:00:00.102) 0:00:02.201 ********* 2025-05-14 02:17:24.294500 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:17:24.294593 | orchestrator | 2025-05-14 02:17:24.294606 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-05-14 02:17:24.294667 | orchestrator | Wednesday 14 May 2025 02:17:24 +0000 (0:00:00.089) 0:00:02.291 ********* 2025-05-14 02:17:25.079676 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:25.079908 | orchestrator | 2025-05-14 02:17:25.080082 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:17:25.081094 | orchestrator | Wednesday 14 May 2025 02:17:25 +0000 (0:00:00.785) 0:00:03.076 ********* 2025-05-14 02:17:25.203580 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:17:25.206816 | orchestrator | 2025-05-14 02:17:25.207543 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:17:25.208324 | orchestrator | 2025-05-14 02:17:25.208998 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:17:25.210818 | orchestrator | Wednesday 14 May 2025 02:17:25 +0000 (0:00:00.121) 0:00:03.197 ********* 2025-05-14 02:17:25.290966 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:17:25.291087 | orchestrator | 2025-05-14 02:17:25.291168 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:17:25.291185 | orchestrator | Wednesday 14 May 2025 02:17:25 +0000 (0:00:00.090) 0:00:03.288 ********* 2025-05-14 02:17:25.942258 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:25.942353 | orchestrator | 2025-05-14 02:17:25.945060 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:17:25.945259 | orchestrator | Wednesday 14 May 2025 02:17:25 +0000 (0:00:00.648) 0:00:03.937 ********* 2025-05-14 02:17:26.053246 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:17:26.054521 | orchestrator | 2025-05-14 02:17:26.056906 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:17:26.057507 | orchestrator | 2025-05-14 02:17:26.057988 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:17:26.058383 | orchestrator | Wednesday 14 May 2025 02:17:26 +0000 (0:00:00.110) 0:00:04.047 ********* 2025-05-14 02:17:26.158781 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:17:26.159075 | orchestrator | 2025-05-14 02:17:26.159368 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:17:26.159837 | orchestrator | Wednesday 14 May 2025 02:17:26 +0000 (0:00:00.106) 0:00:04.154 ********* 2025-05-14 02:17:26.866385 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:26.867182 | orchestrator | 2025-05-14 02:17:26.867760 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:17:26.869556 | orchestrator | Wednesday 14 May 2025 02:17:26 +0000 (0:00:00.706) 0:00:04.860 ********* 2025-05-14 02:17:26.975696 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:17:26.976519 | orchestrator | 2025-05-14 02:17:26.977900 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:17:26.978841 | orchestrator | 2025-05-14 02:17:26.979834 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:17:26.981360 | orchestrator | Wednesday 14 May 2025 02:17:26 +0000 (0:00:00.109) 0:00:04.969 ********* 2025-05-14 02:17:27.082374 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:17:27.083526 | orchestrator | 2025-05-14 02:17:27.084360 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:17:27.086777 | orchestrator | Wednesday 14 May 2025 02:17:27 +0000 (0:00:00.108) 0:00:05.078 ********* 2025-05-14 02:17:27.762239 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:27.762375 | orchestrator | 2025-05-14 02:17:27.762392 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:17:27.763053 | orchestrator | Wednesday 14 May 2025 02:17:27 +0000 (0:00:00.678) 0:00:05.757 ********* 2025-05-14 02:17:27.795194 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:17:27.795396 | orchestrator | 2025-05-14 02:17:27.796291 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:17:27.796939 | orchestrator | 2025-05-14 02:17:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:17:27.797006 | orchestrator | 2025-05-14 02:17:27 | INFO  | Please wait and do not abort execution. 2025-05-14 02:17:27.798328 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:27.799041 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:27.799836 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:27.799857 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:27.800220 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:27.800241 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:17:27.800657 | orchestrator | 2025-05-14 02:17:27.800949 | orchestrator | Wednesday 14 May 2025 02:17:27 +0000 (0:00:00.035) 0:00:05.793 ********* 2025-05-14 02:17:27.801443 | orchestrator | =============================================================================== 2025-05-14 02:17:27.801903 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.43s 2025-05-14 02:17:27.802204 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s 2025-05-14 02:17:27.802662 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2025-05-14 02:17:28.319523 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-14 02:17:29.791807 | orchestrator | 2025-05-14 02:17:29 | INFO  | Task cbaf74a3-5d83-401a-bf8d-3b6426d5979b (wait-for-connection) was prepared for execution. 2025-05-14 02:17:29.791909 | orchestrator | 2025-05-14 02:17:29 | INFO  | It takes a moment until task cbaf74a3-5d83-401a-bf8d-3b6426d5979b (wait-for-connection) has been started and output is visible here. 
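Editor's note: the reboot play above and the `wait-for-connection` play that starts here implement a common fire-and-forget reboot pattern: each node is told to reboot without waiting for it to come back (the "wait for the reboot to complete" task stays skipped), and a separate play then blocks until SSH is reachable again. The sketch below shows the general shape of that pattern in plain Ansible; it is an illustration modeled on the task names in this log, not the actual osism playbooks, and the `ireallymeanit` guard, the reboot command, and the timeouts are assumptions:

# Illustrative sketch of the reboot-then-wait pattern; not the osism source.
- name: Reboot systems
  hosts: testbed-nodes
  gather_facts: false
  serial: 1                  # the per-host "PLAY [Reboot systems]" blocks above suggest host-by-host execution
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to really reboot the nodes."
      when: ireallymeanit | default('no') != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && systemctl reboot
      async: 1               # fire the reboot and return immediately
      poll: 0
      changed_when: true

- name: Wait until remote systems are reachable
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        delay: 10            # assumed values; the log only shows the ~12.6s total wait
        timeout: 600

In this run the confirmation guard is skipped on every node because `-e ireallymeanit=yes` was passed on the command line.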
2025-05-14 02:17:32.926554 | orchestrator | 2025-05-14 02:17:32.927474 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-14 02:17:32.927635 | orchestrator | 2025-05-14 02:17:32.928340 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-14 02:17:32.928968 | orchestrator | Wednesday 14 May 2025 02:17:32 +0000 (0:00:00.173) 0:00:00.173 ********* 2025-05-14 02:17:45.563897 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:45.564019 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:45.565112 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:45.566942 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:45.567441 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:45.568227 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:45.569107 | orchestrator | 2025-05-14 02:17:45.570216 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:17:45.570313 | orchestrator | 2025-05-14 02:17:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:17:45.570337 | orchestrator | 2025-05-14 02:17:45 | INFO  | Please wait and do not abort execution. 2025-05-14 02:17:45.570665 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:17:45.571359 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:17:45.572458 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:17:45.573218 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:17:45.574757 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:17:45.576019 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:17:45.576372 | orchestrator | 2025-05-14 02:17:45.577254 | orchestrator | Wednesday 14 May 2025 02:17:45 +0000 (0:00:12.637) 0:00:12.811 ********* 2025-05-14 02:17:45.578111 | orchestrator | =============================================================================== 2025-05-14 02:17:45.578516 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.64s 2025-05-14 02:17:46.113596 | orchestrator | + osism apply hddtemp 2025-05-14 02:17:47.619554 | orchestrator | 2025-05-14 02:17:47 | INFO  | Task ee5820db-7880-4276-94da-8ffb4b479175 (hddtemp) was prepared for execution. 2025-05-14 02:17:47.619657 | orchestrator | 2025-05-14 02:17:47 | INFO  | It takes a moment until task ee5820db-7880-4276-94da-8ffb4b479175 (hddtemp) has been started and output is visible here. 
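The hddtemp task queued above runs the osism.services.hddtemp role; as its output below shows, it removes the legacy hddtemp package, enables and loads the in-kernel drivetemp module, and installs lm-sensors. A rough manual equivalent on a Debian-family node, shown only as a sketch (package and module handling are standard, but the role's exact internals are not reproduced here):

    # Make drivetemp load on boot and load it now.
    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf
    sudo modprobe drivetemp

    # Replace the obsolete hddtemp daemon with lm-sensors.
    sudo apt-get remove -y hddtemp || true
    sudo apt-get install -y lm-sensors
    sudo systemctl enable --now lm-sensors

    # Drive temperatures are now exposed as hwmon sensors.
    sensors | grep -i -A2 drivetemp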
2025-05-14 02:17:50.866519 | orchestrator | 2025-05-14 02:17:50.867108 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-14 02:17:50.867361 | orchestrator | 2025-05-14 02:17:50.872076 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-14 02:17:50.873608 | orchestrator | Wednesday 14 May 2025 02:17:50 +0000 (0:00:00.199) 0:00:00.199 ********* 2025-05-14 02:17:51.016176 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:51.093691 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:51.169856 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:51.251522 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:51.333211 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:51.569630 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:51.569894 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:51.570836 | orchestrator | 2025-05-14 02:17:51.571308 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-14 02:17:51.571828 | orchestrator | Wednesday 14 May 2025 02:17:51 +0000 (0:00:00.703) 0:00:00.902 ********* 2025-05-14 02:17:52.757476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:17:52.758191 | orchestrator | 2025-05-14 02:17:52.761616 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-14 02:17:52.761743 | orchestrator | Wednesday 14 May 2025 02:17:52 +0000 (0:00:01.185) 0:00:02.088 ********* 2025-05-14 02:17:54.646873 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:54.647935 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:54.652172 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:54.652254 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:54.652810 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:54.653652 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:54.654203 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:54.654909 | orchestrator | 2025-05-14 02:17:54.655458 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-14 02:17:54.656071 | orchestrator | Wednesday 14 May 2025 02:17:54 +0000 (0:00:01.891) 0:00:03.979 ********* 2025-05-14 02:17:55.288017 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:55.383278 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:55.835806 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:55.838141 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:55.841106 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:55.844388 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:55.844415 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:55.844503 | orchestrator | 2025-05-14 02:17:55.844790 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-14 02:17:55.848289 | orchestrator | Wednesday 14 May 2025 02:17:55 +0000 (0:00:01.180) 0:00:05.160 ********* 2025-05-14 02:17:57.114799 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:57.114938 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:57.115139 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:57.115158 | orchestrator | ok: [testbed-node-3] 2025-05-14 
02:17:57.115552 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:57.115912 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:57.116356 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:57.117108 | orchestrator | 2025-05-14 02:17:57.120209 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-14 02:17:57.120502 | orchestrator | Wednesday 14 May 2025 02:17:57 +0000 (0:00:01.285) 0:00:06.446 ********* 2025-05-14 02:17:57.375047 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:17:57.461511 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:17:57.568011 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:17:57.659359 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:17:57.796460 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:57.797591 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:17:57.798964 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:17:57.801118 | orchestrator | 2025-05-14 02:17:57.802458 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-14 02:17:57.802740 | orchestrator | Wednesday 14 May 2025 02:17:57 +0000 (0:00:00.683) 0:00:07.129 ********* 2025-05-14 02:18:10.563591 | orchestrator | changed: [testbed-manager] 2025-05-14 02:18:10.563813 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:18:10.567029 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:18:10.567161 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:18:10.568264 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:18:10.568586 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:18:10.572545 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:18:10.574440 | orchestrator | 2025-05-14 02:18:10.575374 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-14 02:18:10.575873 | orchestrator | Wednesday 14 May 2025 02:18:10 +0000 (0:00:12.761) 0:00:19.890 ********* 2025-05-14 02:18:11.614346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:18:11.614483 | orchestrator | 2025-05-14 02:18:11.614897 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-14 02:18:11.615396 | orchestrator | Wednesday 14 May 2025 02:18:11 +0000 (0:00:01.056) 0:00:20.947 ********* 2025-05-14 02:18:13.463288 | orchestrator | changed: [testbed-manager] 2025-05-14 02:18:13.463543 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:18:13.465597 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:18:13.465622 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:18:13.467571 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:18:13.468630 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:18:13.469549 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:18:13.471056 | orchestrator | 2025-05-14 02:18:13.471843 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:18:13.473254 | orchestrator | 2025-05-14 02:18:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:18:13.473337 | orchestrator | 2025-05-14 02:18:13 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:18:13.474317 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:18:13.475506 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:13.480337 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:13.480365 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:13.480378 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:13.481253 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:13.482496 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:13.484292 | orchestrator | 2025-05-14 02:18:13.484326 | orchestrator | Wednesday 14 May 2025 02:18:13 +0000 (0:00:01.849) 0:00:22.796 ********* 2025-05-14 02:18:13.484341 | orchestrator | =============================================================================== 2025-05-14 02:18:13.484387 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.76s 2025-05-14 02:18:13.484447 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.89s 2025-05-14 02:18:13.485245 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.85s 2025-05-14 02:18:13.485462 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.29s 2025-05-14 02:18:13.485886 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.19s 2025-05-14 02:18:13.486372 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s 2025-05-14 02:18:13.486815 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.06s 2025-05-14 02:18:13.487828 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.70s 2025-05-14 02:18:13.487849 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.68s 2025-05-14 02:18:14.100990 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-14 02:18:15.704158 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-14 02:18:15.704281 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-14 02:18:15.704301 | orchestrator | + local max_attempts=60 2025-05-14 02:18:15.704321 | orchestrator | + local name=ceph-ansible 2025-05-14 02:18:15.704340 | orchestrator | + local attempt_num=1 2025-05-14 02:18:15.704884 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-14 02:18:15.743135 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:18:15.743273 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-14 02:18:15.743289 | orchestrator | + local max_attempts=60 2025-05-14 02:18:15.743302 | orchestrator | + local name=kolla-ansible 2025-05-14 02:18:15.743314 | orchestrator | + local attempt_num=1 2025-05-14 02:18:15.743424 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-14 02:18:15.774074 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:18:15.774176 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-05-14 02:18:15.774191 | orchestrator | + local max_attempts=60 2025-05-14 02:18:15.774203 | orchestrator | + local name=osism-ansible 2025-05-14 02:18:15.774213 | orchestrator | + local attempt_num=1 2025-05-14 02:18:15.774340 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-14 02:18:15.807892 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:18:15.808025 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-14 02:18:15.808039 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-14 02:18:15.974872 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-14 02:18:16.124838 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-14 02:18:16.266769 | orchestrator | ARA in osism-ansible already disabled. 2025-05-14 02:18:16.422841 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-14 02:18:16.423779 | orchestrator | + osism apply gather-facts 2025-05-14 02:18:17.895075 | orchestrator | 2025-05-14 02:18:17 | INFO  | Task a436feb3-480f-4011-adac-0b24e98f7eb7 (gather-facts) was prepared for execution. 2025-05-14 02:18:17.895196 | orchestrator | 2025-05-14 02:18:17 | INFO  | It takes a moment until task a436feb3-480f-4011-adac-0b24e98f7eb7 (gather-facts) has been started and output is visible here. 2025-05-14 02:18:20.885617 | orchestrator | 2025-05-14 02:18:20.885935 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:18:20.885964 | orchestrator | 2025-05-14 02:18:20.886971 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:18:20.887313 | orchestrator | Wednesday 14 May 2025 02:18:20 +0000 (0:00:00.147) 0:00:00.147 ********* 2025-05-14 02:18:25.779870 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:18:25.780169 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:18:25.781411 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:18:25.781924 | orchestrator | ok: [testbed-manager] 2025-05-14 02:18:25.783059 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:18:25.783827 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:18:25.785197 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:18:25.787853 | orchestrator | 2025-05-14 02:18:25.788612 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 02:18:25.789379 | orchestrator | 2025-05-14 02:18:25.790228 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 02:18:25.790904 | orchestrator | Wednesday 14 May 2025 02:18:25 +0000 (0:00:04.900) 0:00:05.048 ********* 2025-05-14 02:18:25.961567 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:18:26.032838 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:18:26.108066 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:18:26.186133 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:18:26.263212 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:18:26.303999 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:18:26.304916 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:18:26.304968 | orchestrator | 2025-05-14 02:18:26.305294 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:18:26.306195 | orchestrator | 2025-05-14 02:18:26 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-05-14 02:18:26.306314 | orchestrator | 2025-05-14 02:18:26 | INFO  | Please wait and do not abort execution. 2025-05-14 02:18:26.307402 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:26.307869 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:26.308962 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:26.309058 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:26.309774 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:26.309968 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:26.310403 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:26.310896 | orchestrator | 2025-05-14 02:18:26.311258 | orchestrator | Wednesday 14 May 2025 02:18:26 +0000 (0:00:00.524) 0:00:05.572 ********* 2025-05-14 02:18:26.311667 | orchestrator | =============================================================================== 2025-05-14 02:18:26.312003 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.90s 2025-05-14 02:18:26.312353 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-05-14 02:18:26.894440 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-14 02:18:26.908223 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-14 02:18:26.929292 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-14 02:18:26.944523 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-14 02:18:26.958684 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-14 02:18:26.970736 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-14 02:18:26.984089 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-14 02:18:26.999061 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-14 02:18:27.023832 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-14 02:18:27.037585 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-14 02:18:27.048603 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-14 02:18:27.062705 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-14 02:18:27.076351 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-05-14 02:18:27.094752 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-14 02:18:27.114142 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-14 02:18:27.125755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-14 02:18:27.145556 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-14 02:18:27.156861 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-14 02:18:27.171853 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-14 02:18:27.188258 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-14 02:18:27.203863 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-14 02:18:27.299642 | orchestrator | ok: Runtime: 0:25:27.340781 2025-05-14 02:18:27.404700 | 2025-05-14 02:18:27.404857 | TASK [Deploy services] 2025-05-14 02:18:27.941046 | orchestrator | skipping: Conditional result was False 2025-05-14 02:18:27.959543 | 2025-05-14 02:18:27.959744 | TASK [Deploy in a nutshell] 2025-05-14 02:18:28.688003 | orchestrator | + set -e 2025-05-14 02:18:28.688201 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 02:18:28.688231 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 02:18:28.688264 | orchestrator | ++ INTERACTIVE=false 2025-05-14 02:18:28.688286 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-14 02:18:28.688305 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 02:18:28.688319 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 02:18:28.688365 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 02:18:28.688394 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 02:18:28.688409 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 02:18:28.688425 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 02:18:28.688437 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 02:18:28.688456 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 02:18:28.688467 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-14 02:18:28.688502 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-14 02:18:28.688514 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 02:18:28.688528 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 02:18:28.688539 | orchestrator | ++ export ARA=false 2025-05-14 02:18:28.688551 | orchestrator | ++ ARA=false 2025-05-14 02:18:28.688562 | orchestrator | ++ export TEMPEST=false 2025-05-14 02:18:28.688574 | orchestrator | ++ TEMPEST=false 2025-05-14 02:18:28.688585 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 02:18:28.688595 | orchestrator | ++ IS_ZUUL=true 2025-05-14 02:18:28.688607 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-05-14 02:18:28.688618 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-05-14 02:18:28.688629 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 02:18:28.688640 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 02:18:28.688651 | orchestrator | 2025-05-14 02:18:28.688663 | orchestrator | # PULL IMAGES 2025-05-14 02:18:28.688673 | orchestrator | 2025-05-14 
02:18:28.688684 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 02:18:28.688695 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 02:18:28.688706 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 02:18:28.688766 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 02:18:28.688777 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 02:18:28.688788 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 02:18:28.688799 | orchestrator | + echo 2025-05-14 02:18:28.688810 | orchestrator | + echo '# PULL IMAGES' 2025-05-14 02:18:28.688821 | orchestrator | + echo 2025-05-14 02:18:28.689649 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-14 02:18:28.750691 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-14 02:18:28.750793 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-14 02:18:30.138478 | orchestrator | 2025-05-14 02:18:30 | INFO  | Trying to run play pull-images in environment custom 2025-05-14 02:18:30.188238 | orchestrator | 2025-05-14 02:18:30 | INFO  | Task f7e9ed91-5e7a-42aa-b01c-2ee01dc2a7cf (pull-images) was prepared for execution. 2025-05-14 02:18:30.188339 | orchestrator | 2025-05-14 02:18:30 | INFO  | It takes a moment until task f7e9ed91-5e7a-42aa-b01c-2ee01dc2a7cf (pull-images) has been started and output is visible here. 2025-05-14 02:18:33.248773 | orchestrator | 2025-05-14 02:18:33.249569 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-14 02:18:33.250570 | orchestrator | 2025-05-14 02:18:33.250873 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-14 02:18:33.253171 | orchestrator | Wednesday 14 May 2025 02:18:33 +0000 (0:00:00.142) 0:00:00.142 ********* 2025-05-14 02:19:10.883198 | orchestrator | changed: [testbed-manager] 2025-05-14 02:19:10.883325 | orchestrator | 2025-05-14 02:19:10.883344 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-14 02:19:10.883358 | orchestrator | Wednesday 14 May 2025 02:19:10 +0000 (0:00:37.631) 0:00:37.774 ********* 2025-05-14 02:19:58.125566 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-14 02:19:58.125674 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-14 02:19:58.125688 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-14 02:19:58.125712 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-14 02:19:58.125719 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-14 02:19:58.125736 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-14 02:19:58.125907 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-14 02:19:58.126413 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-14 02:19:58.126911 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-14 02:19:58.127289 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-14 02:19:58.130531 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-14 02:19:58.130940 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-14 02:19:58.133578 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-14 02:19:58.133585 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-14 02:19:58.133590 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-14 02:19:58.133594 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-14 02:19:58.133598 | 
orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-14 02:19:58.133603 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-14 02:19:58.133607 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-14 02:19:58.133839 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-14 02:19:58.134094 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-14 02:19:58.134385 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-14 02:19:58.134774 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-14 02:19:58.135077 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-14 02:19:58.135333 | orchestrator | 2025-05-14 02:19:58.135604 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:19:58.135959 | orchestrator | 2025-05-14 02:19:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:19:58.136076 | orchestrator | 2025-05-14 02:19:58 | INFO  | Please wait and do not abort execution. 2025-05-14 02:19:58.136488 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:19:58.136822 | orchestrator | 2025-05-14 02:19:58.137074 | orchestrator | Wednesday 14 May 2025 02:19:58 +0000 (0:00:47.243) 0:01:25.017 ********* 2025-05-14 02:19:58.137345 | orchestrator | =============================================================================== 2025-05-14 02:19:58.137737 | orchestrator | Pull other images ------------------------------------------------------ 47.24s 2025-05-14 02:19:58.138039 | orchestrator | Pull keystone image ---------------------------------------------------- 37.63s 2025-05-14 02:20:00.360648 | orchestrator | 2025-05-14 02:20:00 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-14 02:20:00.422128 | orchestrator | 2025-05-14 02:20:00 | INFO  | Task 67967382-faa2-4115-9e0e-3fb3a84a2047 (wipe-partitions) was prepared for execution. 2025-05-14 02:20:00.422198 | orchestrator | 2025-05-14 02:20:00 | INFO  | It takes a moment until task 67967382-faa2-4115-9e0e-3fb3a84a2047 (wipe-partitions) has been started and output is visible here. 
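The wipe-partitions task queued above prepares the OSD candidate disks before Ceph is deployed; the play that follows checks the devices, wipes filesystem signatures, zeroes the first 32 MiB and re-triggers udev (the LVM cleanup tasks are skipped in this run because no old volumes exist). A condensed shell sketch of those steps, using the device names from the log; this is destructive and only makes sense on disks that really are disposable:

    # Disks used as Ceph OSD candidates in this testbed run.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        test -b "$dev"                                   # check device availability
        sudo wipefs --all "$dev"                         # wipe partition/filesystem signatures
        sudo dd if=/dev/zero of="$dev" bs=1M count=32 \
            oflag=direct status=none                     # overwrite first 32M with zeros
    done

    # Let udev pick up the now-empty devices.
    sudo udevadm control --reload-rules
    sudo udevadm trigger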
2025-05-14 02:20:03.625438 | orchestrator | 2025-05-14 02:20:03.625561 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-14 02:20:03.625579 | orchestrator | 2025-05-14 02:20:03.626588 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-14 02:20:03.626689 | orchestrator | Wednesday 14 May 2025 02:20:03 +0000 (0:00:00.126) 0:00:00.127 ********* 2025-05-14 02:20:04.169921 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:20:04.171874 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:20:04.171908 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:20:04.173138 | orchestrator | 2025-05-14 02:20:04.174190 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-14 02:20:04.174666 | orchestrator | Wednesday 14 May 2025 02:20:04 +0000 (0:00:00.545) 0:00:00.672 ********* 2025-05-14 02:20:04.342322 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:04.441402 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:04.441935 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:20:04.442979 | orchestrator | 2025-05-14 02:20:04.443983 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-14 02:20:04.444586 | orchestrator | Wednesday 14 May 2025 02:20:04 +0000 (0:00:00.272) 0:00:00.945 ********* 2025-05-14 02:20:05.157487 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:20:05.158410 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:20:05.158854 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:20:05.163542 | orchestrator | 2025-05-14 02:20:05.163567 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-14 02:20:05.163580 | orchestrator | Wednesday 14 May 2025 02:20:05 +0000 (0:00:00.711) 0:00:01.656 ********* 2025-05-14 02:20:05.327257 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:05.426402 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:05.426560 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:20:05.427091 | orchestrator | 2025-05-14 02:20:05.427490 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-14 02:20:05.429114 | orchestrator | Wednesday 14 May 2025 02:20:05 +0000 (0:00:00.273) 0:00:01.930 ********* 2025-05-14 02:20:06.579289 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-14 02:20:06.579656 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-14 02:20:06.580234 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-14 02:20:06.580276 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-14 02:20:06.580295 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-14 02:20:06.580516 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-14 02:20:06.582850 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-14 02:20:06.583459 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-14 02:20:06.583941 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-14 02:20:06.585423 | orchestrator | 2025-05-14 02:20:06.586434 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-14 02:20:06.586806 | orchestrator | Wednesday 14 May 2025 02:20:06 +0000 (0:00:01.153) 0:00:03.083 ********* 2025-05-14 02:20:07.940231 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-14 02:20:07.940375 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-14 02:20:07.940452 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-14 02:20:07.941189 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-14 02:20:07.941407 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-14 02:20:07.942156 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-14 02:20:07.946751 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-14 02:20:07.946789 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-14 02:20:07.946800 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-14 02:20:07.946812 | orchestrator | 2025-05-14 02:20:07.946825 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-14 02:20:07.946838 | orchestrator | Wednesday 14 May 2025 02:20:07 +0000 (0:00:01.357) 0:00:04.441 ********* 2025-05-14 02:20:10.158806 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-14 02:20:10.158980 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-14 02:20:10.161063 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-14 02:20:10.161452 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-14 02:20:10.162423 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-14 02:20:10.162807 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-14 02:20:10.163135 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-14 02:20:10.163578 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-14 02:20:10.163912 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-14 02:20:10.164266 | orchestrator | 2025-05-14 02:20:10.164583 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-14 02:20:10.164932 | orchestrator | Wednesday 14 May 2025 02:20:10 +0000 (0:00:02.216) 0:00:06.657 ********* 2025-05-14 02:20:10.797238 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:20:10.797343 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:20:10.799990 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:20:10.800015 | orchestrator | 2025-05-14 02:20:10.804541 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-14 02:20:10.804835 | orchestrator | Wednesday 14 May 2025 02:20:10 +0000 (0:00:00.642) 0:00:07.300 ********* 2025-05-14 02:20:11.547726 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:20:11.549089 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:20:11.549733 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:20:11.552980 | orchestrator | 2025-05-14 02:20:11.553024 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:20:11.553051 | orchestrator | 2025-05-14 02:20:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:20:11.553060 | orchestrator | 2025-05-14 02:20:11 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:20:11.553101 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:11.554239 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:11.554668 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:11.558988 | orchestrator | 2025-05-14 02:20:11.559565 | orchestrator | Wednesday 14 May 2025 02:20:11 +0000 (0:00:00.747) 0:00:08.047 ********* 2025-05-14 02:20:11.559743 | orchestrator | =============================================================================== 2025-05-14 02:20:11.560747 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.22s 2025-05-14 02:20:11.561174 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s 2025-05-14 02:20:11.561302 | orchestrator | Check device availability ----------------------------------------------- 1.15s 2025-05-14 02:20:11.561794 | orchestrator | Request device events from the kernel ----------------------------------- 0.75s 2025-05-14 02:20:11.562045 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.71s 2025-05-14 02:20:11.562335 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s 2025-05-14 02:20:11.562755 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s 2025-05-14 02:20:11.564639 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-05-14 02:20:11.564950 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-05-14 02:20:13.654194 | orchestrator | 2025-05-14 02:20:13 | INFO  | Task 8c3e2332-0771-4f64-82d0-eb0b59f2a12e (facts) was prepared for execution. 2025-05-14 02:20:13.654302 | orchestrator | 2025-05-14 02:20:13 | INFO  | It takes a moment until task 8c3e2332-0771-4f64-82d0-eb0b59f2a12e (facts) has been started and output is visible here. 
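The facts task queued above applies osism.commons.facts, which, as the output below shows, only ensures a custom facts directory exists before the usual fact gathering runs. For orientation, Ansible reads local facts from *.fact files in a facts directory (typically /etc/ansible/facts.d; the role's target path is configurable). A tiny illustrative example of such a fact file; the file name and its contents are made up for this sketch and are not part of the role:

    # Custom facts directory read by Ansible's setup module.
    sudo mkdir -p /etc/ansible/facts.d

    # Any *.fact file containing JSON (or an executable emitting JSON) becomes
    # available as ansible_local.<name> after fact gathering.
    echo '{"deployment": "testbed", "ceph_stack": "ceph-ansible"}' | \
        sudo tee /etc/ansible/facts.d/testbed.fact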
2025-05-14 02:20:17.031988 | orchestrator | 2025-05-14 02:20:17.034169 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-14 02:20:17.035149 | orchestrator | 2025-05-14 02:20:17.038247 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 02:20:17.039044 | orchestrator | Wednesday 14 May 2025 02:20:17 +0000 (0:00:00.202) 0:00:00.202 ********* 2025-05-14 02:20:18.032796 | orchestrator | ok: [testbed-manager] 2025-05-14 02:20:18.032868 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:20:18.033891 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:20:18.035446 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:20:18.039627 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:20:18.040123 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:20:18.043308 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:20:18.043399 | orchestrator | 2025-05-14 02:20:18.044072 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 02:20:18.044476 | orchestrator | Wednesday 14 May 2025 02:20:18 +0000 (0:00:01.001) 0:00:01.203 ********* 2025-05-14 02:20:18.216864 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:20:18.316469 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:20:18.432858 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:20:18.526525 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:20:18.622116 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:19.419823 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:19.422584 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:20:19.424000 | orchestrator | 2025-05-14 02:20:19.424736 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:20:19.426082 | orchestrator | 2025-05-14 02:20:19.427952 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:20:19.428014 | orchestrator | Wednesday 14 May 2025 02:20:19 +0000 (0:00:01.388) 0:00:02.591 ********* 2025-05-14 02:20:24.724432 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:20:24.724558 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:20:24.725527 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:20:24.729083 | orchestrator | ok: [testbed-manager] 2025-05-14 02:20:24.729564 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:20:24.730161 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:20:24.730940 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:20:24.731822 | orchestrator | 2025-05-14 02:20:24.732100 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 02:20:24.732587 | orchestrator | 2025-05-14 02:20:24.733108 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 02:20:24.734682 | orchestrator | Wednesday 14 May 2025 02:20:24 +0000 (0:00:05.306) 0:00:07.898 ********* 2025-05-14 02:20:24.982272 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:20:25.050392 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:20:25.117546 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:20:25.184066 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:20:25.249527 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:25.275139 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:25.275396 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:20:25.276988 | orchestrator | 2025-05-14 02:20:25.277722 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:20:25.277918 | orchestrator | 2025-05-14 02:20:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:20:25.277937 | orchestrator | 2025-05-14 02:20:25 | INFO  | Please wait and do not abort execution. 2025-05-14 02:20:25.279261 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:25.280218 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:25.280474 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:25.281261 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:25.281380 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:25.281968 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:25.282465 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:20:25.283157 | orchestrator | 2025-05-14 02:20:25.283405 | orchestrator | Wednesday 14 May 2025 02:20:25 +0000 (0:00:00.550) 0:00:08.448 ********* 2025-05-14 02:20:25.284331 | orchestrator | =============================================================================== 2025-05-14 02:20:25.284464 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.31s 2025-05-14 02:20:25.285040 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s 2025-05-14 02:20:25.285779 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.00s 2025-05-14 02:20:25.286618 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2025-05-14 02:20:27.078202 | orchestrator | 2025-05-14 02:20:27 | INFO  | Task b684a221-2c24-424f-bc5f-bd2270fa9ecb (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-14 02:20:27.078270 | orchestrator | 2025-05-14 02:20:27 | INFO  | It takes a moment until task b684a221-2c24-424f-bc5f-bd2270fa9ecb (ceph-configure-lvm-volumes) has been started and output is visible here. 
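The ceph-configure-lvm-volumes task queued above walks every block device on the node, collects its stable /dev/disk/by-id aliases, derives per-OSD LVM UUIDs and finally compiles the lvm_volumes list consumed by ceph-ansible, as the per-device tasks below show for testbed-node-3. A small sketch of how those stable aliases can be inspected by hand on the data disks (/dev/sdb, /dev/sdc, /dev/sdd in this run); the loop and output format are illustrative, not the play's implementation:

    for dev in sdb sdc sdd; do
        echo "== /dev/$dev =="
        # Every by-id symlink pointing at this disk,
        # e.g. scsi-0QEMU_QEMU_HARDDISK_<uuid> on these QEMU-backed nodes.
        find /dev/disk/by-id -lname "*/$dev" -printf '%p -> %l\n'
        # The same information straight from udev.
        udevadm info --query=symlink --name="/dev/$dev"
    done

Using by-id aliases instead of /dev/sdX names keeps the generated OSD configuration stable across reboots, where kernel device ordering may change.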
2025-05-14 02:20:30.213590 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:20:30.764248 | orchestrator | 2025-05-14 02:20:30.764898 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-14 02:20:30.769325 | orchestrator | 2025-05-14 02:20:30.769859 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:20:30.770473 | orchestrator | Wednesday 14 May 2025 02:20:30 +0000 (0:00:00.459) 0:00:00.459 ********* 2025-05-14 02:20:31.025739 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 02:20:31.029004 | orchestrator | 2025-05-14 02:20:31.029876 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:20:31.031237 | orchestrator | Wednesday 14 May 2025 02:20:31 +0000 (0:00:00.263) 0:00:00.722 ********* 2025-05-14 02:20:31.260899 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:20:31.262873 | orchestrator | 2025-05-14 02:20:31.262954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:31.263433 | orchestrator | Wednesday 14 May 2025 02:20:31 +0000 (0:00:00.237) 0:00:00.959 ********* 2025-05-14 02:20:31.795944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-14 02:20:31.797352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-14 02:20:31.798196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-14 02:20:31.799921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-14 02:20:31.801458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-14 02:20:31.802944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-14 02:20:31.803277 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-14 02:20:31.804851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-14 02:20:31.805590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-14 02:20:31.806324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-14 02:20:31.807274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-14 02:20:31.807824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-14 02:20:31.808185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-14 02:20:31.808507 | orchestrator | 2025-05-14 02:20:31.809100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:31.809592 | orchestrator | Wednesday 14 May 2025 02:20:31 +0000 (0:00:00.528) 0:00:01.488 ********* 2025-05-14 02:20:31.984376 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:31.984612 | orchestrator | 2025-05-14 02:20:31.984666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:31.984918 | orchestrator | Wednesday 14 May 2025 02:20:31 +0000 
(0:00:00.195) 0:00:01.683 ********* 2025-05-14 02:20:32.179920 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:32.180111 | orchestrator | 2025-05-14 02:20:32.180423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:32.180808 | orchestrator | Wednesday 14 May 2025 02:20:32 +0000 (0:00:00.194) 0:00:01.878 ********* 2025-05-14 02:20:32.378132 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:32.379546 | orchestrator | 2025-05-14 02:20:32.385461 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:32.385523 | orchestrator | Wednesday 14 May 2025 02:20:32 +0000 (0:00:00.198) 0:00:02.076 ********* 2025-05-14 02:20:32.597842 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:32.598224 | orchestrator | 2025-05-14 02:20:32.599218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:32.601067 | orchestrator | Wednesday 14 May 2025 02:20:32 +0000 (0:00:00.219) 0:00:02.296 ********* 2025-05-14 02:20:32.798936 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:32.800780 | orchestrator | 2025-05-14 02:20:32.802785 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:32.805070 | orchestrator | Wednesday 14 May 2025 02:20:32 +0000 (0:00:00.199) 0:00:02.495 ********* 2025-05-14 02:20:33.012000 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:33.012110 | orchestrator | 2025-05-14 02:20:33.012559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:33.014829 | orchestrator | Wednesday 14 May 2025 02:20:33 +0000 (0:00:00.211) 0:00:02.707 ********* 2025-05-14 02:20:33.211348 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:33.214539 | orchestrator | 2025-05-14 02:20:33.214576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:33.215241 | orchestrator | Wednesday 14 May 2025 02:20:33 +0000 (0:00:00.202) 0:00:02.909 ********* 2025-05-14 02:20:33.417118 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:33.418476 | orchestrator | 2025-05-14 02:20:33.418796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:33.419388 | orchestrator | Wednesday 14 May 2025 02:20:33 +0000 (0:00:00.206) 0:00:03.116 ********* 2025-05-14 02:20:34.161507 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1) 2025-05-14 02:20:34.162972 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1) 2025-05-14 02:20:34.163381 | orchestrator | 2025-05-14 02:20:34.164011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:34.164102 | orchestrator | Wednesday 14 May 2025 02:20:34 +0000 (0:00:00.744) 0:00:03.861 ********* 2025-05-14 02:20:35.059214 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6c9e420d-0c60-4ebc-ac19-f905b2b7a82f) 2025-05-14 02:20:35.059437 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6c9e420d-0c60-4ebc-ac19-f905b2b7a82f) 2025-05-14 02:20:35.059472 | orchestrator | 2025-05-14 02:20:35.060136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 
02:20:35.064168 | orchestrator | Wednesday 14 May 2025 02:20:35 +0000 (0:00:00.894) 0:00:04.756 ********* 2025-05-14 02:20:35.592848 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c39c8ea-7878-4e89-b4ec-61bbe868aea7) 2025-05-14 02:20:35.593051 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c39c8ea-7878-4e89-b4ec-61bbe868aea7) 2025-05-14 02:20:35.593223 | orchestrator | 2025-05-14 02:20:35.593768 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:35.594182 | orchestrator | Wednesday 14 May 2025 02:20:35 +0000 (0:00:00.536) 0:00:05.292 ********* 2025-05-14 02:20:36.068806 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e31a2ff7-84d9-48c9-b0e1-1526f23b46b1) 2025-05-14 02:20:36.068988 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e31a2ff7-84d9-48c9-b0e1-1526f23b46b1) 2025-05-14 02:20:36.069126 | orchestrator | 2025-05-14 02:20:36.070478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:36.070669 | orchestrator | Wednesday 14 May 2025 02:20:36 +0000 (0:00:00.473) 0:00:05.766 ********* 2025-05-14 02:20:36.505156 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:20:36.506634 | orchestrator | 2025-05-14 02:20:36.506746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:36.506762 | orchestrator | Wednesday 14 May 2025 02:20:36 +0000 (0:00:00.434) 0:00:06.201 ********* 2025-05-14 02:20:36.933066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-14 02:20:36.933170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-14 02:20:36.933185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-14 02:20:36.933197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-14 02:20:36.933208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-14 02:20:36.933343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-14 02:20:36.933634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-14 02:20:36.934100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-14 02:20:36.935301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-14 02:20:36.935609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-14 02:20:36.936157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-14 02:20:36.936325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-14 02:20:36.937243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-14 02:20:36.937266 | orchestrator | 2025-05-14 02:20:36.937280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:36.937457 | orchestrator | Wednesday 14 May 2025 02:20:36 +0000 
(0:00:00.427) 0:00:06.628 ********* 2025-05-14 02:20:37.141001 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:37.141106 | orchestrator | 2025-05-14 02:20:37.141122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:37.141136 | orchestrator | Wednesday 14 May 2025 02:20:37 +0000 (0:00:00.210) 0:00:06.839 ********* 2025-05-14 02:20:37.341030 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:37.342460 | orchestrator | 2025-05-14 02:20:37.342637 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:37.343531 | orchestrator | Wednesday 14 May 2025 02:20:37 +0000 (0:00:00.201) 0:00:07.040 ********* 2025-05-14 02:20:37.552295 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:37.553512 | orchestrator | 2025-05-14 02:20:37.553673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:37.554195 | orchestrator | Wednesday 14 May 2025 02:20:37 +0000 (0:00:00.211) 0:00:07.251 ********* 2025-05-14 02:20:37.765036 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:37.766404 | orchestrator | 2025-05-14 02:20:37.766444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:37.766963 | orchestrator | Wednesday 14 May 2025 02:20:37 +0000 (0:00:00.212) 0:00:07.463 ********* 2025-05-14 02:20:38.424894 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:38.424984 | orchestrator | 2025-05-14 02:20:38.425861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:38.425931 | orchestrator | Wednesday 14 May 2025 02:20:38 +0000 (0:00:00.659) 0:00:08.123 ********* 2025-05-14 02:20:38.692712 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:38.692830 | orchestrator | 2025-05-14 02:20:38.695523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:38.696237 | orchestrator | Wednesday 14 May 2025 02:20:38 +0000 (0:00:00.267) 0:00:08.391 ********* 2025-05-14 02:20:38.900473 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:38.900574 | orchestrator | 2025-05-14 02:20:38.900645 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:38.901069 | orchestrator | Wednesday 14 May 2025 02:20:38 +0000 (0:00:00.205) 0:00:08.596 ********* 2025-05-14 02:20:39.094654 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:39.096176 | orchestrator | 2025-05-14 02:20:39.096631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:39.096974 | orchestrator | Wednesday 14 May 2025 02:20:39 +0000 (0:00:00.195) 0:00:08.792 ********* 2025-05-14 02:20:39.742859 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-14 02:20:39.742990 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-14 02:20:39.743074 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-14 02:20:39.743476 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-14 02:20:39.745241 | orchestrator | 2025-05-14 02:20:39.745578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:39.746158 | orchestrator | Wednesday 14 May 2025 02:20:39 +0000 (0:00:00.649) 0:00:09.441 ********* 2025-05-14 02:20:39.930173 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 02:20:39.930584 | orchestrator | 2025-05-14 02:20:39.932123 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:39.933301 | orchestrator | Wednesday 14 May 2025 02:20:39 +0000 (0:00:00.184) 0:00:09.626 ********* 2025-05-14 02:20:40.102461 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:40.106352 | orchestrator | 2025-05-14 02:20:40.106528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:40.106802 | orchestrator | Wednesday 14 May 2025 02:20:40 +0000 (0:00:00.175) 0:00:09.802 ********* 2025-05-14 02:20:40.310946 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:40.311820 | orchestrator | 2025-05-14 02:20:40.312587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:40.313225 | orchestrator | Wednesday 14 May 2025 02:20:40 +0000 (0:00:00.208) 0:00:10.010 ********* 2025-05-14 02:20:40.503769 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:40.504969 | orchestrator | 2025-05-14 02:20:40.504983 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-14 02:20:40.505884 | orchestrator | Wednesday 14 May 2025 02:20:40 +0000 (0:00:00.191) 0:00:10.202 ********* 2025-05-14 02:20:40.678509 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-14 02:20:40.681245 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-14 02:20:40.681479 | orchestrator | 2025-05-14 02:20:40.681992 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-14 02:20:40.693474 | orchestrator | Wednesday 14 May 2025 02:20:40 +0000 (0:00:00.174) 0:00:10.377 ********* 2025-05-14 02:20:40.871656 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:40.871950 | orchestrator | 2025-05-14 02:20:40.871975 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-14 02:20:40.872876 | orchestrator | Wednesday 14 May 2025 02:20:40 +0000 (0:00:00.193) 0:00:10.570 ********* 2025-05-14 02:20:41.255645 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:41.256406 | orchestrator | 2025-05-14 02:20:41.256511 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-14 02:20:41.256668 | orchestrator | Wednesday 14 May 2025 02:20:41 +0000 (0:00:00.385) 0:00:10.955 ********* 2025-05-14 02:20:41.391771 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:41.394429 | orchestrator | 2025-05-14 02:20:41.395265 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-14 02:20:41.396320 | orchestrator | Wednesday 14 May 2025 02:20:41 +0000 (0:00:00.135) 0:00:11.090 ********* 2025-05-14 02:20:41.538484 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:20:41.539293 | orchestrator | 2025-05-14 02:20:41.539349 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-14 02:20:41.541209 | orchestrator | Wednesday 14 May 2025 02:20:41 +0000 (0:00:00.146) 0:00:11.237 ********* 2025-05-14 02:20:41.706184 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'caf94b5f-07a0-5316-9d7c-8f668ab64c5d'}}) 2025-05-14 02:20:41.708518 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': 'a0a91196-50f5-599a-8231-3d981ca1eca9'}}) 2025-05-14 02:20:41.709314 | orchestrator | 2025-05-14 02:20:41.710314 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-14 02:20:41.711097 | orchestrator | Wednesday 14 May 2025 02:20:41 +0000 (0:00:00.167) 0:00:11.405 ********* 2025-05-14 02:20:41.867629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'caf94b5f-07a0-5316-9d7c-8f668ab64c5d'}})  2025-05-14 02:20:41.868441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0a91196-50f5-599a-8231-3d981ca1eca9'}})  2025-05-14 02:20:41.869269 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:41.870588 | orchestrator | 2025-05-14 02:20:41.871897 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-14 02:20:41.872752 | orchestrator | Wednesday 14 May 2025 02:20:41 +0000 (0:00:00.158) 0:00:11.564 ********* 2025-05-14 02:20:42.044147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'caf94b5f-07a0-5316-9d7c-8f668ab64c5d'}})  2025-05-14 02:20:42.045836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0a91196-50f5-599a-8231-3d981ca1eca9'}})  2025-05-14 02:20:42.048318 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:42.048346 | orchestrator | 2025-05-14 02:20:42.049718 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-14 02:20:42.050547 | orchestrator | Wednesday 14 May 2025 02:20:42 +0000 (0:00:00.177) 0:00:11.741 ********* 2025-05-14 02:20:42.248961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'caf94b5f-07a0-5316-9d7c-8f668ab64c5d'}})  2025-05-14 02:20:42.249871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0a91196-50f5-599a-8231-3d981ca1eca9'}})  2025-05-14 02:20:42.250964 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:42.251174 | orchestrator | 2025-05-14 02:20:42.251861 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-14 02:20:42.252480 | orchestrator | Wednesday 14 May 2025 02:20:42 +0000 (0:00:00.206) 0:00:11.947 ********* 2025-05-14 02:20:42.438780 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:20:42.440817 | orchestrator | 2025-05-14 02:20:42.442980 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-14 02:20:42.445590 | orchestrator | Wednesday 14 May 2025 02:20:42 +0000 (0:00:00.188) 0:00:12.136 ********* 2025-05-14 02:20:42.627167 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:20:42.627436 | orchestrator | 2025-05-14 02:20:42.629140 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-14 02:20:42.629355 | orchestrator | Wednesday 14 May 2025 02:20:42 +0000 (0:00:00.187) 0:00:12.323 ********* 2025-05-14 02:20:42.785920 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:42.786042 | orchestrator | 2025-05-14 02:20:42.786639 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-14 02:20:42.787204 | orchestrator | Wednesday 14 May 2025 02:20:42 +0000 (0:00:00.162) 0:00:12.485 ********* 2025-05-14 02:20:42.931918 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
02:20:42.932488 | orchestrator | 2025-05-14 02:20:42.935348 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-14 02:20:42.935712 | orchestrator | Wednesday 14 May 2025 02:20:42 +0000 (0:00:00.144) 0:00:12.630 ********* 2025-05-14 02:20:43.105573 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:43.106358 | orchestrator | 2025-05-14 02:20:43.109438 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-14 02:20:43.109807 | orchestrator | Wednesday 14 May 2025 02:20:43 +0000 (0:00:00.173) 0:00:12.804 ********* 2025-05-14 02:20:43.491279 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:20:43.491401 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:20:43.491465 | orchestrator |  "sdb": { 2025-05-14 02:20:43.491907 | orchestrator |  "osd_lvm_uuid": "caf94b5f-07a0-5316-9d7c-8f668ab64c5d" 2025-05-14 02:20:43.495426 | orchestrator |  }, 2025-05-14 02:20:43.495449 | orchestrator |  "sdc": { 2025-05-14 02:20:43.495480 | orchestrator |  "osd_lvm_uuid": "a0a91196-50f5-599a-8231-3d981ca1eca9" 2025-05-14 02:20:43.495490 | orchestrator |  } 2025-05-14 02:20:43.495499 | orchestrator |  } 2025-05-14 02:20:43.495508 | orchestrator | } 2025-05-14 02:20:43.495517 | orchestrator | 2025-05-14 02:20:43.495559 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-14 02:20:43.496849 | orchestrator | Wednesday 14 May 2025 02:20:43 +0000 (0:00:00.381) 0:00:13.186 ********* 2025-05-14 02:20:43.736843 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:43.737625 | orchestrator | 2025-05-14 02:20:43.738834 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-14 02:20:43.743577 | orchestrator | Wednesday 14 May 2025 02:20:43 +0000 (0:00:00.239) 0:00:13.425 ********* 2025-05-14 02:20:43.965179 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:43.965288 | orchestrator | 2025-05-14 02:20:43.965305 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-14 02:20:43.965388 | orchestrator | Wednesday 14 May 2025 02:20:43 +0000 (0:00:00.230) 0:00:13.656 ********* 2025-05-14 02:20:44.100193 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:20:44.105594 | orchestrator | 2025-05-14 02:20:44.105670 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-14 02:20:44.105714 | orchestrator | Wednesday 14 May 2025 02:20:44 +0000 (0:00:00.142) 0:00:13.799 ********* 2025-05-14 02:20:44.356639 | orchestrator | changed: [testbed-node-3] => { 2025-05-14 02:20:44.356925 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-14 02:20:44.356990 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:20:44.359855 | orchestrator |  "sdb": { 2025-05-14 02:20:44.361315 | orchestrator |  "osd_lvm_uuid": "caf94b5f-07a0-5316-9d7c-8f668ab64c5d" 2025-05-14 02:20:44.361388 | orchestrator |  }, 2025-05-14 02:20:44.362728 | orchestrator |  "sdc": { 2025-05-14 02:20:44.363012 | orchestrator |  "osd_lvm_uuid": "a0a91196-50f5-599a-8231-3d981ca1eca9" 2025-05-14 02:20:44.363142 | orchestrator |  } 2025-05-14 02:20:44.365438 | orchestrator |  }, 2025-05-14 02:20:44.365488 | orchestrator |  "lvm_volumes": [ 2025-05-14 02:20:44.365575 | orchestrator |  { 2025-05-14 02:20:44.366457 | orchestrator |  "data": "osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d", 2025-05-14 02:20:44.366486 | orchestrator |  
"data_vg": "ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d" 2025-05-14 02:20:44.367529 | orchestrator |  }, 2025-05-14 02:20:44.368740 | orchestrator |  { 2025-05-14 02:20:44.369081 | orchestrator |  "data": "osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9", 2025-05-14 02:20:44.369191 | orchestrator |  "data_vg": "ceph-a0a91196-50f5-599a-8231-3d981ca1eca9" 2025-05-14 02:20:44.369566 | orchestrator |  } 2025-05-14 02:20:44.371262 | orchestrator |  ] 2025-05-14 02:20:44.371748 | orchestrator |  } 2025-05-14 02:20:44.371943 | orchestrator | } 2025-05-14 02:20:44.372009 | orchestrator | 2025-05-14 02:20:44.372506 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-14 02:20:44.373276 | orchestrator | Wednesday 14 May 2025 02:20:44 +0000 (0:00:00.256) 0:00:14.056 ********* 2025-05-14 02:20:46.372867 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 02:20:46.373910 | orchestrator | 2025-05-14 02:20:46.374068 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-14 02:20:46.377618 | orchestrator | 2025-05-14 02:20:46.378091 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:20:46.378490 | orchestrator | Wednesday 14 May 2025 02:20:46 +0000 (0:00:02.016) 0:00:16.072 ********* 2025-05-14 02:20:46.583398 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-14 02:20:46.584724 | orchestrator | 2025-05-14 02:20:46.585528 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:20:46.587427 | orchestrator | Wednesday 14 May 2025 02:20:46 +0000 (0:00:00.210) 0:00:16.283 ********* 2025-05-14 02:20:46.804199 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:20:46.804449 | orchestrator | 2025-05-14 02:20:46.804472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:46.804824 | orchestrator | Wednesday 14 May 2025 02:20:46 +0000 (0:00:00.219) 0:00:16.502 ********* 2025-05-14 02:20:47.203011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-14 02:20:47.203099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-14 02:20:47.203302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-14 02:20:47.204681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-14 02:20:47.205021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-14 02:20:47.206188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-14 02:20:47.206357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-14 02:20:47.206511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-14 02:20:47.206829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-14 02:20:47.208285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-14 02:20:47.209473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-14 02:20:47.209926 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-14 02:20:47.210356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-14 02:20:47.211189 | orchestrator | 2025-05-14 02:20:47.212715 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:47.213100 | orchestrator | Wednesday 14 May 2025 02:20:47 +0000 (0:00:00.399) 0:00:16.901 ********* 2025-05-14 02:20:47.385240 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:47.389476 | orchestrator | 2025-05-14 02:20:47.394679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:47.394772 | orchestrator | Wednesday 14 May 2025 02:20:47 +0000 (0:00:00.181) 0:00:17.083 ********* 2025-05-14 02:20:47.606818 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:47.611278 | orchestrator | 2025-05-14 02:20:47.612343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:47.613106 | orchestrator | Wednesday 14 May 2025 02:20:47 +0000 (0:00:00.221) 0:00:17.305 ********* 2025-05-14 02:20:47.798242 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:47.799801 | orchestrator | 2025-05-14 02:20:47.800255 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:47.800791 | orchestrator | Wednesday 14 May 2025 02:20:47 +0000 (0:00:00.190) 0:00:17.495 ********* 2025-05-14 02:20:48.217559 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:48.218406 | orchestrator | 2025-05-14 02:20:48.218842 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:48.222123 | orchestrator | Wednesday 14 May 2025 02:20:48 +0000 (0:00:00.421) 0:00:17.917 ********* 2025-05-14 02:20:48.396245 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:48.399122 | orchestrator | 2025-05-14 02:20:48.399152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:48.400106 | orchestrator | Wednesday 14 May 2025 02:20:48 +0000 (0:00:00.178) 0:00:18.095 ********* 2025-05-14 02:20:48.585121 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:48.588374 | orchestrator | 2025-05-14 02:20:48.589577 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:48.590164 | orchestrator | Wednesday 14 May 2025 02:20:48 +0000 (0:00:00.188) 0:00:18.283 ********* 2025-05-14 02:20:48.781531 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:48.781627 | orchestrator | 2025-05-14 02:20:48.781635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:48.781676 | orchestrator | Wednesday 14 May 2025 02:20:48 +0000 (0:00:00.198) 0:00:18.481 ********* 2025-05-14 02:20:48.961178 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:48.964049 | orchestrator | 2025-05-14 02:20:48.964555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:48.964850 | orchestrator | Wednesday 14 May 2025 02:20:48 +0000 (0:00:00.178) 0:00:18.660 ********* 2025-05-14 02:20:49.337101 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b) 2025-05-14 02:20:49.337764 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b) 2025-05-14 02:20:49.338870 | orchestrator | 2025-05-14 02:20:49.340834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:49.341297 | orchestrator | Wednesday 14 May 2025 02:20:49 +0000 (0:00:00.374) 0:00:19.034 ********* 2025-05-14 02:20:49.743262 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2fe9822d-742a-4109-b2fd-4f62bd011e9b) 2025-05-14 02:20:49.743412 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2fe9822d-742a-4109-b2fd-4f62bd011e9b) 2025-05-14 02:20:49.743494 | orchestrator | 2025-05-14 02:20:49.745431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:49.748418 | orchestrator | Wednesday 14 May 2025 02:20:49 +0000 (0:00:00.408) 0:00:19.442 ********* 2025-05-14 02:20:50.130329 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4bf8951c-ead1-422f-8e98-563fd238f873) 2025-05-14 02:20:50.130428 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4bf8951c-ead1-422f-8e98-563fd238f873) 2025-05-14 02:20:50.131469 | orchestrator | 2025-05-14 02:20:50.131652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:50.134434 | orchestrator | Wednesday 14 May 2025 02:20:50 +0000 (0:00:00.387) 0:00:19.830 ********* 2025-05-14 02:20:50.514331 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9158ba9c-f661-457a-83a0-7301d2e715e9) 2025-05-14 02:20:50.514480 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9158ba9c-f661-457a-83a0-7301d2e715e9) 2025-05-14 02:20:50.514871 | orchestrator | 2025-05-14 02:20:50.515105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:20:50.515480 | orchestrator | Wednesday 14 May 2025 02:20:50 +0000 (0:00:00.383) 0:00:20.213 ********* 2025-05-14 02:20:50.793738 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:20:50.793861 | orchestrator | 2025-05-14 02:20:50.793965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:50.794159 | orchestrator | Wednesday 14 May 2025 02:20:50 +0000 (0:00:00.280) 0:00:20.494 ********* 2025-05-14 02:20:51.326259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-14 02:20:51.326732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-14 02:20:51.327371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-14 02:20:51.331171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-14 02:20:51.332312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-14 02:20:51.332809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-14 02:20:51.333630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-14 02:20:51.334670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-14 02:20:51.335153 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-14 02:20:51.335630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-14 02:20:51.336077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-14 02:20:51.340107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-14 02:20:51.340381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-14 02:20:51.340841 | orchestrator | 2025-05-14 02:20:51.341151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:51.341729 | orchestrator | Wednesday 14 May 2025 02:20:51 +0000 (0:00:00.530) 0:00:21.024 ********* 2025-05-14 02:20:51.526748 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:51.526848 | orchestrator | 2025-05-14 02:20:51.529438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:51.530830 | orchestrator | Wednesday 14 May 2025 02:20:51 +0000 (0:00:00.199) 0:00:21.223 ********* 2025-05-14 02:20:51.698471 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:51.699288 | orchestrator | 2025-05-14 02:20:51.699338 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:51.699361 | orchestrator | Wednesday 14 May 2025 02:20:51 +0000 (0:00:00.175) 0:00:21.398 ********* 2025-05-14 02:20:51.876032 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:51.876153 | orchestrator | 2025-05-14 02:20:51.876257 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:51.876479 | orchestrator | Wednesday 14 May 2025 02:20:51 +0000 (0:00:00.176) 0:00:21.575 ********* 2025-05-14 02:20:52.049860 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:52.052394 | orchestrator | 2025-05-14 02:20:52.052430 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:52.052445 | orchestrator | Wednesday 14 May 2025 02:20:52 +0000 (0:00:00.172) 0:00:21.747 ********* 2025-05-14 02:20:52.233504 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:52.233882 | orchestrator | 2025-05-14 02:20:52.235815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:52.236278 | orchestrator | Wednesday 14 May 2025 02:20:52 +0000 (0:00:00.184) 0:00:21.932 ********* 2025-05-14 02:20:52.421170 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:52.421383 | orchestrator | 2025-05-14 02:20:52.422417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:52.425896 | orchestrator | Wednesday 14 May 2025 02:20:52 +0000 (0:00:00.187) 0:00:22.119 ********* 2025-05-14 02:20:52.615053 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:52.615290 | orchestrator | 2025-05-14 02:20:52.615916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:52.616328 | orchestrator | Wednesday 14 May 2025 02:20:52 +0000 (0:00:00.190) 0:00:22.310 ********* 2025-05-14 02:20:52.819168 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:52.820044 | orchestrator | 2025-05-14 02:20:52.820271 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-14 02:20:52.820868 | orchestrator | Wednesday 14 May 2025 02:20:52 +0000 (0:00:00.205) 0:00:22.516 ********* 2025-05-14 02:20:53.589330 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-14 02:20:53.589840 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-14 02:20:53.592118 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-14 02:20:53.592296 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-14 02:20:53.593116 | orchestrator | 2025-05-14 02:20:53.593314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:53.593876 | orchestrator | Wednesday 14 May 2025 02:20:53 +0000 (0:00:00.770) 0:00:23.287 ********* 2025-05-14 02:20:54.150629 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:54.151116 | orchestrator | 2025-05-14 02:20:54.153841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:54.154562 | orchestrator | Wednesday 14 May 2025 02:20:54 +0000 (0:00:00.563) 0:00:23.850 ********* 2025-05-14 02:20:54.349774 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:54.352507 | orchestrator | 2025-05-14 02:20:54.352554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:54.353558 | orchestrator | Wednesday 14 May 2025 02:20:54 +0000 (0:00:00.197) 0:00:24.048 ********* 2025-05-14 02:20:54.523133 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:54.523953 | orchestrator | 2025-05-14 02:20:54.524425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:20:54.525139 | orchestrator | Wednesday 14 May 2025 02:20:54 +0000 (0:00:00.174) 0:00:24.223 ********* 2025-05-14 02:20:54.712173 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:54.712525 | orchestrator | 2025-05-14 02:20:54.713985 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-14 02:20:54.715504 | orchestrator | Wednesday 14 May 2025 02:20:54 +0000 (0:00:00.188) 0:00:24.411 ********* 2025-05-14 02:20:54.875073 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-14 02:20:54.875252 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-14 02:20:54.877291 | orchestrator | 2025-05-14 02:20:54.878498 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-14 02:20:54.878524 | orchestrator | Wednesday 14 May 2025 02:20:54 +0000 (0:00:00.162) 0:00:24.574 ********* 2025-05-14 02:20:55.020083 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:55.020219 | orchestrator | 2025-05-14 02:20:55.020451 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-14 02:20:55.021171 | orchestrator | Wednesday 14 May 2025 02:20:55 +0000 (0:00:00.144) 0:00:24.718 ********* 2025-05-14 02:20:55.185752 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:55.185837 | orchestrator | 2025-05-14 02:20:55.185863 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-14 02:20:55.185872 | orchestrator | Wednesday 14 May 2025 02:20:55 +0000 (0:00:00.165) 0:00:24.884 ********* 2025-05-14 02:20:55.319531 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:55.319617 | orchestrator | 2025-05-14 
02:20:55.319849 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-14 02:20:55.322332 | orchestrator | Wednesday 14 May 2025 02:20:55 +0000 (0:00:00.135) 0:00:25.019 ********* 2025-05-14 02:20:55.468991 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:20:55.469452 | orchestrator | 2025-05-14 02:20:55.470189 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-14 02:20:55.474266 | orchestrator | Wednesday 14 May 2025 02:20:55 +0000 (0:00:00.148) 0:00:25.168 ********* 2025-05-14 02:20:55.676422 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3c2360-3d2e-5360-8839-85b817b77bc3'}}) 2025-05-14 02:20:55.676805 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fecac30f-087c-5b0b-83ef-f9d2b642a995'}}) 2025-05-14 02:20:55.677496 | orchestrator | 2025-05-14 02:20:55.678296 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-14 02:20:55.678890 | orchestrator | Wednesday 14 May 2025 02:20:55 +0000 (0:00:00.207) 0:00:25.376 ********* 2025-05-14 02:20:55.841061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3c2360-3d2e-5360-8839-85b817b77bc3'}})  2025-05-14 02:20:55.841570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fecac30f-087c-5b0b-83ef-f9d2b642a995'}})  2025-05-14 02:20:55.841950 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:55.842256 | orchestrator | 2025-05-14 02:20:55.842917 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-14 02:20:55.842939 | orchestrator | Wednesday 14 May 2025 02:20:55 +0000 (0:00:00.163) 0:00:25.539 ********* 2025-05-14 02:20:56.216982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3c2360-3d2e-5360-8839-85b817b77bc3'}})  2025-05-14 02:20:56.217161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fecac30f-087c-5b0b-83ef-f9d2b642a995'}})  2025-05-14 02:20:56.218175 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:56.219036 | orchestrator | 2025-05-14 02:20:56.219402 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-14 02:20:56.220134 | orchestrator | Wednesday 14 May 2025 02:20:56 +0000 (0:00:00.376) 0:00:25.915 ********* 2025-05-14 02:20:56.375620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3c2360-3d2e-5360-8839-85b817b77bc3'}})  2025-05-14 02:20:56.375918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fecac30f-087c-5b0b-83ef-f9d2b642a995'}})  2025-05-14 02:20:56.377203 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:56.377752 | orchestrator | 2025-05-14 02:20:56.379993 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-14 02:20:56.380024 | orchestrator | Wednesday 14 May 2025 02:20:56 +0000 (0:00:00.156) 0:00:26.072 ********* 2025-05-14 02:20:56.533457 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:20:56.534556 | orchestrator | 2025-05-14 02:20:56.535678 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-14 02:20:56.537023 | orchestrator | Wednesday 14 May 2025 02:20:56 +0000 
(0:00:00.158) 0:00:26.231 ********* 2025-05-14 02:20:56.691064 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:20:56.691465 | orchestrator | 2025-05-14 02:20:56.693105 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-14 02:20:56.695166 | orchestrator | Wednesday 14 May 2025 02:20:56 +0000 (0:00:00.157) 0:00:26.388 ********* 2025-05-14 02:20:56.833267 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:56.833739 | orchestrator | 2025-05-14 02:20:56.834609 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-14 02:20:56.835367 | orchestrator | Wednesday 14 May 2025 02:20:56 +0000 (0:00:00.142) 0:00:26.531 ********* 2025-05-14 02:20:56.977219 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:56.977319 | orchestrator | 2025-05-14 02:20:56.978295 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-14 02:20:56.979040 | orchestrator | Wednesday 14 May 2025 02:20:56 +0000 (0:00:00.143) 0:00:26.674 ********* 2025-05-14 02:20:57.124369 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:57.125904 | orchestrator | 2025-05-14 02:20:57.125937 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-14 02:20:57.126107 | orchestrator | Wednesday 14 May 2025 02:20:57 +0000 (0:00:00.148) 0:00:26.823 ********* 2025-05-14 02:20:57.276161 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:20:57.276261 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:20:57.276907 | orchestrator |  "sdb": { 2025-05-14 02:20:57.278148 | orchestrator |  "osd_lvm_uuid": "ea3c2360-3d2e-5360-8839-85b817b77bc3" 2025-05-14 02:20:57.278777 | orchestrator |  }, 2025-05-14 02:20:57.282269 | orchestrator |  "sdc": { 2025-05-14 02:20:57.283547 | orchestrator |  "osd_lvm_uuid": "fecac30f-087c-5b0b-83ef-f9d2b642a995" 2025-05-14 02:20:57.284396 | orchestrator |  } 2025-05-14 02:20:57.284984 | orchestrator |  } 2025-05-14 02:20:57.285538 | orchestrator | } 2025-05-14 02:20:57.286066 | orchestrator | 2025-05-14 02:20:57.286630 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-14 02:20:57.287131 | orchestrator | Wednesday 14 May 2025 02:20:57 +0000 (0:00:00.151) 0:00:26.974 ********* 2025-05-14 02:20:57.404655 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:57.406733 | orchestrator | 2025-05-14 02:20:57.407144 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-14 02:20:57.408232 | orchestrator | Wednesday 14 May 2025 02:20:57 +0000 (0:00:00.126) 0:00:27.100 ********* 2025-05-14 02:20:57.545026 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:57.545168 | orchestrator | 2025-05-14 02:20:57.545870 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-14 02:20:57.546517 | orchestrator | Wednesday 14 May 2025 02:20:57 +0000 (0:00:00.143) 0:00:27.244 ********* 2025-05-14 02:20:57.685182 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:20:57.685593 | orchestrator | 2025-05-14 02:20:57.686289 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-14 02:20:57.687193 | orchestrator | Wednesday 14 May 2025 02:20:57 +0000 (0:00:00.138) 0:00:27.383 ********* 2025-05-14 02:20:58.282527 | orchestrator | changed: [testbed-node-4] => { 2025-05-14 02:20:58.283551 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-14 02:20:58.284632 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:20:58.287595 | orchestrator |  "sdb": { 2025-05-14 02:20:58.287756 | orchestrator |  "osd_lvm_uuid": "ea3c2360-3d2e-5360-8839-85b817b77bc3" 2025-05-14 02:20:58.288429 | orchestrator |  }, 2025-05-14 02:20:58.289624 | orchestrator |  "sdc": { 2025-05-14 02:20:58.291100 | orchestrator |  "osd_lvm_uuid": "fecac30f-087c-5b0b-83ef-f9d2b642a995" 2025-05-14 02:20:58.291126 | orchestrator |  } 2025-05-14 02:20:58.291604 | orchestrator |  }, 2025-05-14 02:20:58.291964 | orchestrator |  "lvm_volumes": [ 2025-05-14 02:20:58.292430 | orchestrator |  { 2025-05-14 02:20:58.293300 | orchestrator |  "data": "osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3", 2025-05-14 02:20:58.293646 | orchestrator |  "data_vg": "ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3" 2025-05-14 02:20:58.293860 | orchestrator |  }, 2025-05-14 02:20:58.294874 | orchestrator |  { 2025-05-14 02:20:58.294980 | orchestrator |  "data": "osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995", 2025-05-14 02:20:58.295412 | orchestrator |  "data_vg": "ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995" 2025-05-14 02:20:58.295454 | orchestrator |  } 2025-05-14 02:20:58.295679 | orchestrator |  ] 2025-05-14 02:20:58.296569 | orchestrator |  } 2025-05-14 02:20:58.296631 | orchestrator | } 2025-05-14 02:20:58.296645 | orchestrator | 2025-05-14 02:20:58.296658 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-14 02:20:58.296834 | orchestrator | Wednesday 14 May 2025 02:20:58 +0000 (0:00:00.594) 0:00:27.977 ********* 2025-05-14 02:20:59.696434 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-14 02:20:59.696901 | orchestrator | 2025-05-14 02:20:59.698656 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-14 02:20:59.699084 | orchestrator | 2025-05-14 02:20:59.700702 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:20:59.700724 | orchestrator | Wednesday 14 May 2025 02:20:59 +0000 (0:00:01.417) 0:00:29.395 ********* 2025-05-14 02:20:59.952501 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-14 02:20:59.952675 | orchestrator | 2025-05-14 02:20:59.952968 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:20:59.954110 | orchestrator | Wednesday 14 May 2025 02:20:59 +0000 (0:00:00.253) 0:00:29.648 ********* 2025-05-14 02:21:00.190395 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:21:00.196091 | orchestrator | 2025-05-14 02:21:00.199262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:00.199729 | orchestrator | Wednesday 14 May 2025 02:21:00 +0000 (0:00:00.239) 0:00:29.888 ********* 2025-05-14 02:21:00.953834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-14 02:21:00.957436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-14 02:21:00.957456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-14 02:21:00.958336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-14 02:21:00.959160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-05-14 02:21:00.959756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-14 02:21:00.960444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-14 02:21:00.960828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-14 02:21:00.961728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-14 02:21:00.961857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-14 02:21:00.962320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-14 02:21:00.962793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-14 02:21:00.963029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-14 02:21:00.963410 | orchestrator | 2025-05-14 02:21:00.964285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:00.964744 | orchestrator | Wednesday 14 May 2025 02:21:00 +0000 (0:00:00.762) 0:00:30.650 ********* 2025-05-14 02:21:01.164941 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:01.165196 | orchestrator | 2025-05-14 02:21:01.165757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:01.166557 | orchestrator | Wednesday 14 May 2025 02:21:01 +0000 (0:00:00.212) 0:00:30.863 ********* 2025-05-14 02:21:01.344292 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:01.344507 | orchestrator | 2025-05-14 02:21:01.345473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:01.346301 | orchestrator | Wednesday 14 May 2025 02:21:01 +0000 (0:00:00.179) 0:00:31.043 ********* 2025-05-14 02:21:01.547596 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:01.548318 | orchestrator | 2025-05-14 02:21:01.548778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:01.549270 | orchestrator | Wednesday 14 May 2025 02:21:01 +0000 (0:00:00.203) 0:00:31.246 ********* 2025-05-14 02:21:01.776227 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:01.776587 | orchestrator | 2025-05-14 02:21:01.777870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:01.778198 | orchestrator | Wednesday 14 May 2025 02:21:01 +0000 (0:00:00.228) 0:00:31.475 ********* 2025-05-14 02:21:02.046058 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:02.046146 | orchestrator | 2025-05-14 02:21:02.046404 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:02.046842 | orchestrator | Wednesday 14 May 2025 02:21:02 +0000 (0:00:00.269) 0:00:31.744 ********* 2025-05-14 02:21:02.252734 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:02.253551 | orchestrator | 2025-05-14 02:21:02.254892 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:02.255975 | orchestrator | Wednesday 14 May 2025 02:21:02 +0000 (0:00:00.206) 0:00:31.951 ********* 2025-05-14 02:21:02.458392 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:02.458870 
| orchestrator | 2025-05-14 02:21:02.459351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:02.459857 | orchestrator | Wednesday 14 May 2025 02:21:02 +0000 (0:00:00.204) 0:00:32.156 ********* 2025-05-14 02:21:02.703220 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:02.703905 | orchestrator | 2025-05-14 02:21:02.704593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:02.704918 | orchestrator | Wednesday 14 May 2025 02:21:02 +0000 (0:00:00.243) 0:00:32.399 ********* 2025-05-14 02:21:03.345537 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53) 2025-05-14 02:21:03.346438 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53) 2025-05-14 02:21:03.347443 | orchestrator | 2025-05-14 02:21:03.348290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:03.349195 | orchestrator | Wednesday 14 May 2025 02:21:03 +0000 (0:00:00.644) 0:00:33.043 ********* 2025-05-14 02:21:04.046941 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7d716f79-cf1d-4cd5-9251-d30dd616fe8c) 2025-05-14 02:21:04.047041 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7d716f79-cf1d-4cd5-9251-d30dd616fe8c) 2025-05-14 02:21:04.048140 | orchestrator | 2025-05-14 02:21:04.049055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:04.049875 | orchestrator | Wednesday 14 May 2025 02:21:04 +0000 (0:00:00.700) 0:00:33.744 ********* 2025-05-14 02:21:04.507147 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_276d5307-5ea7-4279-8794-03223ea8507b) 2025-05-14 02:21:04.507617 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_276d5307-5ea7-4279-8794-03223ea8507b) 2025-05-14 02:21:04.508513 | orchestrator | 2025-05-14 02:21:04.509379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:04.510106 | orchestrator | Wednesday 14 May 2025 02:21:04 +0000 (0:00:00.457) 0:00:34.201 ********* 2025-05-14 02:21:04.929762 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_07a08b1a-3bd9-437e-a737-9a0e3fc440bf) 2025-05-14 02:21:04.929998 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_07a08b1a-3bd9-437e-a737-9a0e3fc440bf) 2025-05-14 02:21:04.931071 | orchestrator | 2025-05-14 02:21:04.931958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:04.932628 | orchestrator | Wednesday 14 May 2025 02:21:04 +0000 (0:00:00.426) 0:00:34.627 ********* 2025-05-14 02:21:05.277891 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:21:05.278162 | orchestrator | 2025-05-14 02:21:05.279055 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:05.279645 | orchestrator | Wednesday 14 May 2025 02:21:05 +0000 (0:00:00.349) 0:00:34.977 ********* 2025-05-14 02:21:05.679715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-14 02:21:05.679969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-14 02:21:05.682944 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-14 02:21:05.682998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-14 02:21:05.683107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-14 02:21:05.683975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-14 02:21:05.684305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-14 02:21:05.685319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-14 02:21:05.686909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-14 02:21:05.687554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-14 02:21:05.688802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-14 02:21:05.689613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-14 02:21:05.690731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-14 02:21:05.691095 | orchestrator | 2025-05-14 02:21:05.692358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:05.692411 | orchestrator | Wednesday 14 May 2025 02:21:05 +0000 (0:00:00.399) 0:00:35.377 ********* 2025-05-14 02:21:05.878927 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:05.880259 | orchestrator | 2025-05-14 02:21:05.882098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:05.883745 | orchestrator | Wednesday 14 May 2025 02:21:05 +0000 (0:00:00.199) 0:00:35.577 ********* 2025-05-14 02:21:06.092315 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:06.092566 | orchestrator | 2025-05-14 02:21:06.093818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:06.094571 | orchestrator | Wednesday 14 May 2025 02:21:06 +0000 (0:00:00.213) 0:00:35.790 ********* 2025-05-14 02:21:06.315457 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:06.316641 | orchestrator | 2025-05-14 02:21:06.319103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:06.319956 | orchestrator | Wednesday 14 May 2025 02:21:06 +0000 (0:00:00.223) 0:00:36.013 ********* 2025-05-14 02:21:06.532822 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:06.533539 | orchestrator | 2025-05-14 02:21:06.534186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:06.536615 | orchestrator | Wednesday 14 May 2025 02:21:06 +0000 (0:00:00.215) 0:00:36.229 ********* 2025-05-14 02:21:07.167983 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:07.168118 | orchestrator | 2025-05-14 02:21:07.168147 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:07.168454 | orchestrator | Wednesday 14 May 2025 02:21:07 +0000 (0:00:00.636) 0:00:36.865 ********* 2025-05-14 02:21:07.366940 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:21:07.368504 | orchestrator | 2025-05-14 02:21:07.368544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:07.369865 | orchestrator | Wednesday 14 May 2025 02:21:07 +0000 (0:00:00.198) 0:00:37.064 ********* 2025-05-14 02:21:07.566265 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:07.567873 | orchestrator | 2025-05-14 02:21:07.569364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:07.569581 | orchestrator | Wednesday 14 May 2025 02:21:07 +0000 (0:00:00.199) 0:00:37.263 ********* 2025-05-14 02:21:07.781742 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:07.782941 | orchestrator | 2025-05-14 02:21:07.785175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:07.786150 | orchestrator | Wednesday 14 May 2025 02:21:07 +0000 (0:00:00.216) 0:00:37.480 ********* 2025-05-14 02:21:08.429818 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-14 02:21:08.429968 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-14 02:21:08.433411 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-14 02:21:08.433510 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-14 02:21:08.433577 | orchestrator | 2025-05-14 02:21:08.434603 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:08.435262 | orchestrator | Wednesday 14 May 2025 02:21:08 +0000 (0:00:00.646) 0:00:38.126 ********* 2025-05-14 02:21:08.629579 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:08.630138 | orchestrator | 2025-05-14 02:21:08.630871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:08.631448 | orchestrator | Wednesday 14 May 2025 02:21:08 +0000 (0:00:00.200) 0:00:38.327 ********* 2025-05-14 02:21:08.835379 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:08.836522 | orchestrator | 2025-05-14 02:21:08.837155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:08.837763 | orchestrator | Wednesday 14 May 2025 02:21:08 +0000 (0:00:00.206) 0:00:38.533 ********* 2025-05-14 02:21:09.043399 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:09.043842 | orchestrator | 2025-05-14 02:21:09.043930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:09.044088 | orchestrator | Wednesday 14 May 2025 02:21:09 +0000 (0:00:00.208) 0:00:38.742 ********* 2025-05-14 02:21:09.255976 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:09.256070 | orchestrator | 2025-05-14 02:21:09.256342 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-14 02:21:09.256877 | orchestrator | Wednesday 14 May 2025 02:21:09 +0000 (0:00:00.213) 0:00:38.955 ********* 2025-05-14 02:21:09.459336 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-14 02:21:09.460873 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-14 02:21:09.461235 | orchestrator | 2025-05-14 02:21:09.461915 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-14 02:21:09.462725 | orchestrator | Wednesday 14 May 2025 02:21:09 +0000 (0:00:00.203) 0:00:39.158 ********* 2025-05-14 02:21:09.804732 | 
orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:09.804833 | orchestrator | 2025-05-14 02:21:09.805096 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-14 02:21:09.805676 | orchestrator | Wednesday 14 May 2025 02:21:09 +0000 (0:00:00.340) 0:00:39.498 ********* 2025-05-14 02:21:09.937559 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:09.937880 | orchestrator | 2025-05-14 02:21:09.938929 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-14 02:21:09.941339 | orchestrator | Wednesday 14 May 2025 02:21:09 +0000 (0:00:00.137) 0:00:39.635 ********* 2025-05-14 02:21:10.094009 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:10.094826 | orchestrator | 2025-05-14 02:21:10.096128 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-14 02:21:10.097029 | orchestrator | Wednesday 14 May 2025 02:21:10 +0000 (0:00:00.156) 0:00:39.792 ********* 2025-05-14 02:21:10.247362 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:21:10.247656 | orchestrator | 2025-05-14 02:21:10.248206 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-14 02:21:10.248777 | orchestrator | Wednesday 14 May 2025 02:21:10 +0000 (0:00:00.153) 0:00:39.946 ********* 2025-05-14 02:21:10.432852 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '03d77871-dede-5752-b4dd-afb6f86d8bca'}}) 2025-05-14 02:21:10.433657 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c7e27ae-f126-51b5-99e7-7e9908cad598'}}) 2025-05-14 02:21:10.434286 | orchestrator | 2025-05-14 02:21:10.435605 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-14 02:21:10.436553 | orchestrator | Wednesday 14 May 2025 02:21:10 +0000 (0:00:00.184) 0:00:40.130 ********* 2025-05-14 02:21:10.624544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '03d77871-dede-5752-b4dd-afb6f86d8bca'}})  2025-05-14 02:21:10.626913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c7e27ae-f126-51b5-99e7-7e9908cad598'}})  2025-05-14 02:21:10.627396 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:10.628087 | orchestrator | 2025-05-14 02:21:10.628897 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-14 02:21:10.629774 | orchestrator | Wednesday 14 May 2025 02:21:10 +0000 (0:00:00.190) 0:00:40.321 ********* 2025-05-14 02:21:10.804335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '03d77871-dede-5752-b4dd-afb6f86d8bca'}})  2025-05-14 02:21:10.805420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c7e27ae-f126-51b5-99e7-7e9908cad598'}})  2025-05-14 02:21:10.805994 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:10.806533 | orchestrator | 2025-05-14 02:21:10.807143 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-14 02:21:10.807677 | orchestrator | Wednesday 14 May 2025 02:21:10 +0000 (0:00:00.182) 0:00:40.503 ********* 2025-05-14 02:21:10.985810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '03d77871-dede-5752-b4dd-afb6f86d8bca'}})  2025-05-14 02:21:10.985882 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c7e27ae-f126-51b5-99e7-7e9908cad598'}})  2025-05-14 02:21:10.988842 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:10.993406 | orchestrator | 2025-05-14 02:21:10.993792 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-14 02:21:10.994260 | orchestrator | Wednesday 14 May 2025 02:21:10 +0000 (0:00:00.178) 0:00:40.681 ********* 2025-05-14 02:21:11.129994 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:21:11.130922 | orchestrator | 2025-05-14 02:21:11.131350 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-14 02:21:11.132000 | orchestrator | Wednesday 14 May 2025 02:21:11 +0000 (0:00:00.147) 0:00:40.829 ********* 2025-05-14 02:21:11.297399 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:21:11.297654 | orchestrator | 2025-05-14 02:21:11.298129 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-14 02:21:11.298175 | orchestrator | Wednesday 14 May 2025 02:21:11 +0000 (0:00:00.166) 0:00:40.995 ********* 2025-05-14 02:21:11.455288 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:11.455938 | orchestrator | 2025-05-14 02:21:11.456344 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-14 02:21:11.457461 | orchestrator | Wednesday 14 May 2025 02:21:11 +0000 (0:00:00.158) 0:00:41.153 ********* 2025-05-14 02:21:11.598574 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:11.598669 | orchestrator | 2025-05-14 02:21:11.599570 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-14 02:21:11.600816 | orchestrator | Wednesday 14 May 2025 02:21:11 +0000 (0:00:00.141) 0:00:41.295 ********* 2025-05-14 02:21:11.972008 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:11.972193 | orchestrator | 2025-05-14 02:21:11.973544 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-14 02:21:11.974306 | orchestrator | Wednesday 14 May 2025 02:21:11 +0000 (0:00:00.374) 0:00:41.670 ********* 2025-05-14 02:21:12.124610 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:21:12.125393 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:21:12.126614 | orchestrator |  "sdb": { 2025-05-14 02:21:12.129090 | orchestrator |  "osd_lvm_uuid": "03d77871-dede-5752-b4dd-afb6f86d8bca" 2025-05-14 02:21:12.129116 | orchestrator |  }, 2025-05-14 02:21:12.129508 | orchestrator |  "sdc": { 2025-05-14 02:21:12.130330 | orchestrator |  "osd_lvm_uuid": "0c7e27ae-f126-51b5-99e7-7e9908cad598" 2025-05-14 02:21:12.130825 | orchestrator |  } 2025-05-14 02:21:12.131166 | orchestrator |  } 2025-05-14 02:21:12.131533 | orchestrator | } 2025-05-14 02:21:12.131946 | orchestrator | 2025-05-14 02:21:12.132401 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-14 02:21:12.132754 | orchestrator | Wednesday 14 May 2025 02:21:12 +0000 (0:00:00.152) 0:00:41.823 ********* 2025-05-14 02:21:12.266818 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:12.266942 | orchestrator | 2025-05-14 02:21:12.267293 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-14 02:21:12.267640 | orchestrator | Wednesday 14 May 2025 02:21:12 +0000 (0:00:00.142) 0:00:41.965 ********* 2025-05-14 
02:21:12.448363 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:12.450311 | orchestrator | 2025-05-14 02:21:12.450911 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-14 02:21:12.451554 | orchestrator | Wednesday 14 May 2025 02:21:12 +0000 (0:00:00.179) 0:00:42.145 ********* 2025-05-14 02:21:12.605512 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:12.606783 | orchestrator | 2025-05-14 02:21:12.608073 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-14 02:21:12.608602 | orchestrator | Wednesday 14 May 2025 02:21:12 +0000 (0:00:00.156) 0:00:42.302 ********* 2025-05-14 02:21:12.874802 | orchestrator | changed: [testbed-node-5] => { 2025-05-14 02:21:12.875426 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-14 02:21:12.876653 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:21:12.877512 | orchestrator |  "sdb": { 2025-05-14 02:21:12.878781 | orchestrator |  "osd_lvm_uuid": "03d77871-dede-5752-b4dd-afb6f86d8bca" 2025-05-14 02:21:12.879405 | orchestrator |  }, 2025-05-14 02:21:12.880393 | orchestrator |  "sdc": { 2025-05-14 02:21:12.881451 | orchestrator |  "osd_lvm_uuid": "0c7e27ae-f126-51b5-99e7-7e9908cad598" 2025-05-14 02:21:12.882379 | orchestrator |  } 2025-05-14 02:21:12.882604 | orchestrator |  }, 2025-05-14 02:21:12.883672 | orchestrator |  "lvm_volumes": [ 2025-05-14 02:21:12.884243 | orchestrator |  { 2025-05-14 02:21:12.884890 | orchestrator |  "data": "osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca", 2025-05-14 02:21:12.885782 | orchestrator |  "data_vg": "ceph-03d77871-dede-5752-b4dd-afb6f86d8bca" 2025-05-14 02:21:12.886006 | orchestrator |  }, 2025-05-14 02:21:12.887223 | orchestrator |  { 2025-05-14 02:21:12.887409 | orchestrator |  "data": "osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598", 2025-05-14 02:21:12.888105 | orchestrator |  "data_vg": "ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598" 2025-05-14 02:21:12.888827 | orchestrator |  } 2025-05-14 02:21:12.888888 | orchestrator |  ] 2025-05-14 02:21:12.889384 | orchestrator |  } 2025-05-14 02:21:12.889962 | orchestrator | } 2025-05-14 02:21:12.890376 | orchestrator | 2025-05-14 02:21:12.890852 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-14 02:21:12.891065 | orchestrator | Wednesday 14 May 2025 02:21:12 +0000 (0:00:00.270) 0:00:42.572 ********* 2025-05-14 02:21:13.987436 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-14 02:21:13.989905 | orchestrator | 2025-05-14 02:21:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:21:13.989955 | orchestrator | 2025-05-14 02:21:13 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:21:13.990415 | orchestrator |
2025-05-14 02:21:13.990441 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:21:13.990727 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-14 02:21:13.991737 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-14 02:21:13.992572 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-14 02:21:13.993360 | orchestrator |
2025-05-14 02:21:13.993969 | orchestrator |
2025-05-14 02:21:13.995765 | orchestrator |
2025-05-14 02:21:13.996974 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:21:14.000515 | orchestrator | Wednesday 14 May 2025 02:21:13 +0000 (0:00:01.111) 0:00:43.684 *********
2025-05-14 02:21:14.000539 | orchestrator | ===============================================================================
2025-05-14 02:21:14.000551 | orchestrator | Write configuration file ------------------------------------------------ 4.55s
2025-05-14 02:21:14.001526 | orchestrator | Add known links to the list of available block devices ------------------ 1.69s
2025-05-14 02:21:14.001848 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s
2025-05-14 02:21:14.002720 | orchestrator | Print configuration data ------------------------------------------------ 1.12s
2025-05-14 02:21:14.003523 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s
2025-05-14 02:21:14.004088 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2025-05-14 02:21:14.004829 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2025-05-14 02:21:14.005183 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.74s
2025-05-14 02:21:14.006717 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s
2025-05-14 02:21:14.007343 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2025-05-14 02:21:14.008275 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.70s
2025-05-14 02:21:14.009982 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2025-05-14 02:21:14.010151 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.69s
2025-05-14 02:21:14.010182 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.69s
2025-05-14 02:21:14.010353 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.68s
2025-05-14 02:21:14.010500 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-05-14 02:21:14.010862 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-05-14 02:21:14.011739 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-05-14 02:21:14.012410 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-05-14 02:21:14.012907 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
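[Editor's note] The "Write configuration file" handler above persists the compiled ceph_osd_devices and lvm_volumes data for each node to the configuration repository on testbed-manager. A minimal sketch of what that written host-vars content for testbed-node-5 plausibly looks like, assuming a file such as host_vars/testbed-node-5/ceph-lvm-configuration.yml (the exact path and file name are not shown in this log); the UUIDs are taken verbatim from the "Print configuration data" task output:

# Hypothetical host vars sketch -- path/file name assumed, values from the log above
---
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 03d77871-dede-5752-b4dd-afb6f86d8bca
  sdc:
    osd_lvm_uuid: 0c7e27ae-f126-51b5-99e7-7e9908cad598
lvm_volumes:
  - data: osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca
    data_vg: ceph-03d77871-dede-5752-b4dd-afb6f86d8bca
  - data: osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598
    data_vg: ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598

Each lvm_volumes entry pairs a block LV (data) with its volume group (data_vg). Because no DB or WAL devices are configured, only block entries are generated, which matches the skipped "Set DB/WAL devices config data" tasks earlier in the play.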
2025-05-14 02:21:26.285277 | orchestrator | 2025-05-14 02:21:26 | INFO  | Task fc8289e7-a306-43da-b035-e2a5dc322a4f is running in background. Output coming soon.
2025-05-14 02:22:04.526327 | orchestrator | 2025-05-14 02:21:55 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-14 02:22:04.526427 | orchestrator | 2025-05-14 02:21:55 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-14 02:22:04.526453 | orchestrator | 2025-05-14 02:21:55 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-14 02:22:04.526467 | orchestrator | 2025-05-14 02:21:56 | INFO  | Handling group overwrites in 99-overwrite
2025-05-14 02:22:04.526479 | orchestrator | 2025-05-14 02:21:56 | INFO  | Removing group frr:children from 60-generic
2025-05-14 02:22:04.526490 | orchestrator | 2025-05-14 02:21:56 | INFO  | Removing group storage:children from 50-kolla
2025-05-14 02:22:04.526502 | orchestrator | 2025-05-14 02:21:56 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-14 02:22:04.526513 | orchestrator | 2025-05-14 02:21:56 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-14 02:22:04.526525 | orchestrator | 2025-05-14 02:21:56 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-14 02:22:04.526536 | orchestrator | 2025-05-14 02:21:56 | INFO  | Handling group overwrites in 20-roles
2025-05-14 02:22:04.526547 | orchestrator | 2025-05-14 02:21:56 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-14 02:22:04.526558 | orchestrator | 2025-05-14 02:21:56 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-14 02:22:04.526570 | orchestrator | 2025-05-14 02:22:04 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-14 02:22:06.194378 | orchestrator | 2025-05-14 02:22:06 | INFO  | Task 8fe852d9-ca49-4aaa-91fb-66e8aaa2f79e (ceph-create-lvm-devices) was prepared for execution.
2025-05-14 02:22:06.194838 | orchestrator | 2025-05-14 02:22:06 | INFO  | It takes a moment until task 8fe852d9-ca49-4aaa-91fb-66e8aaa2f79e (ceph-create-lvm-devices) has been started and output is visible here.
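[Editor's note] The ceph-create-lvm-devices play that starts below turns the generated configuration into actual LVM objects: one volume group per OSD device and one block logical volume inside it (see the "Create block VGs" / "Create block LVs" tasks and the final lvm_report, which maps /dev/sdb and /dev/sdc to the ceph-<uuid> VGs on testbed-node-3). The role's task files under /ansible/tasks are not included in this log; as a rough, hypothetical sketch of the equivalent operation using the community.general LVM modules:

# Hypothetical sketch only -- not the actual task implementation from the log.
# Creates one VG per entry in ceph_osd_devices and one block LV filling that VG,
# using the same naming scheme seen in the play output.
- name: Create block VGs
  community.general.lvg:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    pvs: "/dev/{{ item.key }}"   # e.g. /dev/sdb, /dev/sdc
    state: present
  loop: "{{ ceph_osd_devices | dict2items }}"

- name: Create block LVs
  community.general.lvol:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
    size: 100%VG
  loop: "{{ ceph_osd_devices | dict2items }}"

These pre-created VG/LV pairs are what the lvm_volumes entries refer to, so the subsequent OSD provisioning step can consume them directly instead of partitioning the raw devices itself.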
2025-05-14 02:22:08.912248 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:22:09.397116 | orchestrator | 2025-05-14 02:22:09.398228 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-14 02:22:09.398745 | orchestrator | 2025-05-14 02:22:09.399823 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:22:09.400631 | orchestrator | Wednesday 14 May 2025 02:22:09 +0000 (0:00:00.420) 0:00:00.420 ********* 2025-05-14 02:22:09.638166 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 02:22:09.638865 | orchestrator | 2025-05-14 02:22:09.643068 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:22:09.644149 | orchestrator | Wednesday 14 May 2025 02:22:09 +0000 (0:00:00.241) 0:00:00.662 ********* 2025-05-14 02:22:09.879034 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:22:09.880536 | orchestrator | 2025-05-14 02:22:09.881170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:09.882836 | orchestrator | Wednesday 14 May 2025 02:22:09 +0000 (0:00:00.241) 0:00:00.903 ********* 2025-05-14 02:22:10.607498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-14 02:22:10.608608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-14 02:22:10.609404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-14 02:22:10.610797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-14 02:22:10.611509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-14 02:22:10.612153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-14 02:22:10.612962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-14 02:22:10.614332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-14 02:22:10.615537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-14 02:22:10.616086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-14 02:22:10.616638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-14 02:22:10.617233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-14 02:22:10.618093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-14 02:22:10.618763 | orchestrator | 2025-05-14 02:22:10.619819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:10.620292 | orchestrator | Wednesday 14 May 2025 02:22:10 +0000 (0:00:00.727) 0:00:01.631 ********* 2025-05-14 02:22:10.808330 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:10.808768 | orchestrator | 2025-05-14 02:22:10.809838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:10.810364 | orchestrator | Wednesday 14 May 2025 02:22:10 +0000 
(0:00:00.202) 0:00:01.833 ********* 2025-05-14 02:22:11.014769 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:11.014843 | orchestrator | 2025-05-14 02:22:11.015430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:11.016363 | orchestrator | Wednesday 14 May 2025 02:22:11 +0000 (0:00:00.201) 0:00:02.034 ********* 2025-05-14 02:22:11.211244 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:11.211345 | orchestrator | 2025-05-14 02:22:11.211359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:11.211372 | orchestrator | Wednesday 14 May 2025 02:22:11 +0000 (0:00:00.199) 0:00:02.234 ********* 2025-05-14 02:22:11.399173 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:11.399640 | orchestrator | 2025-05-14 02:22:11.400126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:11.402801 | orchestrator | Wednesday 14 May 2025 02:22:11 +0000 (0:00:00.189) 0:00:02.423 ********* 2025-05-14 02:22:11.627286 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:11.627460 | orchestrator | 2025-05-14 02:22:11.628090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:11.628588 | orchestrator | Wednesday 14 May 2025 02:22:11 +0000 (0:00:00.229) 0:00:02.652 ********* 2025-05-14 02:22:11.853728 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:11.854962 | orchestrator | 2025-05-14 02:22:11.856261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:11.857656 | orchestrator | Wednesday 14 May 2025 02:22:11 +0000 (0:00:00.225) 0:00:02.878 ********* 2025-05-14 02:22:12.062589 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:12.063060 | orchestrator | 2025-05-14 02:22:12.064438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:12.065194 | orchestrator | Wednesday 14 May 2025 02:22:12 +0000 (0:00:00.207) 0:00:03.086 ********* 2025-05-14 02:22:12.261821 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:12.263198 | orchestrator | 2025-05-14 02:22:12.263987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:12.264871 | orchestrator | Wednesday 14 May 2025 02:22:12 +0000 (0:00:00.200) 0:00:03.286 ********* 2025-05-14 02:22:12.906509 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1) 2025-05-14 02:22:12.908131 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1) 2025-05-14 02:22:12.910612 | orchestrator | 2025-05-14 02:22:12.911843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:12.912311 | orchestrator | Wednesday 14 May 2025 02:22:12 +0000 (0:00:00.643) 0:00:03.930 ********* 2025-05-14 02:22:13.720588 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6c9e420d-0c60-4ebc-ac19-f905b2b7a82f) 2025-05-14 02:22:13.720797 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6c9e420d-0c60-4ebc-ac19-f905b2b7a82f) 2025-05-14 02:22:13.721947 | orchestrator | 2025-05-14 02:22:13.725042 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 
02:22:13.725346 | orchestrator | Wednesday 14 May 2025 02:22:13 +0000 (0:00:00.814) 0:00:04.745 ********* 2025-05-14 02:22:14.172426 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c39c8ea-7878-4e89-b4ec-61bbe868aea7) 2025-05-14 02:22:14.172982 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c39c8ea-7878-4e89-b4ec-61bbe868aea7) 2025-05-14 02:22:14.173791 | orchestrator | 2025-05-14 02:22:14.175263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:14.175953 | orchestrator | Wednesday 14 May 2025 02:22:14 +0000 (0:00:00.451) 0:00:05.196 ********* 2025-05-14 02:22:14.610778 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e31a2ff7-84d9-48c9-b0e1-1526f23b46b1) 2025-05-14 02:22:14.611871 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e31a2ff7-84d9-48c9-b0e1-1526f23b46b1) 2025-05-14 02:22:14.612442 | orchestrator | 2025-05-14 02:22:14.613426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:14.614469 | orchestrator | Wednesday 14 May 2025 02:22:14 +0000 (0:00:00.438) 0:00:05.635 ********* 2025-05-14 02:22:14.947258 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:22:14.947610 | orchestrator | 2025-05-14 02:22:14.948914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:14.950163 | orchestrator | Wednesday 14 May 2025 02:22:14 +0000 (0:00:00.336) 0:00:05.971 ********* 2025-05-14 02:22:15.441279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-14 02:22:15.441406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-14 02:22:15.441807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-14 02:22:15.442940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-14 02:22:15.443805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-14 02:22:15.444994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-14 02:22:15.445617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-14 02:22:15.446140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-14 02:22:15.446743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-14 02:22:15.447173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-14 02:22:15.448155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-14 02:22:15.448388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-14 02:22:15.448740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-14 02:22:15.449163 | orchestrator | 2025-05-14 02:22:15.449589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:15.449879 | orchestrator | Wednesday 14 May 2025 02:22:15 +0000 
(0:00:00.493) 0:00:06.464 ********* 2025-05-14 02:22:15.641054 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:15.641179 | orchestrator | 2025-05-14 02:22:15.641355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:15.642305 | orchestrator | Wednesday 14 May 2025 02:22:15 +0000 (0:00:00.199) 0:00:06.664 ********* 2025-05-14 02:22:15.834540 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:15.834756 | orchestrator | 2025-05-14 02:22:15.835725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:15.836225 | orchestrator | Wednesday 14 May 2025 02:22:15 +0000 (0:00:00.194) 0:00:06.859 ********* 2025-05-14 02:22:16.069815 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:16.070151 | orchestrator | 2025-05-14 02:22:16.072298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:16.072817 | orchestrator | Wednesday 14 May 2025 02:22:16 +0000 (0:00:00.234) 0:00:07.094 ********* 2025-05-14 02:22:16.301939 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:16.302183 | orchestrator | 2025-05-14 02:22:16.303126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:16.303579 | orchestrator | Wednesday 14 May 2025 02:22:16 +0000 (0:00:00.231) 0:00:07.326 ********* 2025-05-14 02:22:16.897002 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:16.897597 | orchestrator | 2025-05-14 02:22:16.898231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:16.899009 | orchestrator | Wednesday 14 May 2025 02:22:16 +0000 (0:00:00.596) 0:00:07.922 ********* 2025-05-14 02:22:17.120099 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:17.121073 | orchestrator | 2025-05-14 02:22:17.121448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:17.121736 | orchestrator | Wednesday 14 May 2025 02:22:17 +0000 (0:00:00.222) 0:00:08.144 ********* 2025-05-14 02:22:17.318648 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:17.319012 | orchestrator | 2025-05-14 02:22:17.319887 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:17.320244 | orchestrator | Wednesday 14 May 2025 02:22:17 +0000 (0:00:00.198) 0:00:08.343 ********* 2025-05-14 02:22:17.526252 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:17.526869 | orchestrator | 2025-05-14 02:22:17.527821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:17.528332 | orchestrator | Wednesday 14 May 2025 02:22:17 +0000 (0:00:00.207) 0:00:08.551 ********* 2025-05-14 02:22:18.214486 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-14 02:22:18.214633 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-14 02:22:18.214740 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-14 02:22:18.215716 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-14 02:22:18.216346 | orchestrator | 2025-05-14 02:22:18.217403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:18.219375 | orchestrator | Wednesday 14 May 2025 02:22:18 +0000 (0:00:00.683) 0:00:09.234 ********* 2025-05-14 02:22:18.427414 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 02:22:18.427685 | orchestrator | 2025-05-14 02:22:18.429182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:18.429510 | orchestrator | Wednesday 14 May 2025 02:22:18 +0000 (0:00:00.218) 0:00:09.453 ********* 2025-05-14 02:22:18.636153 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:18.636808 | orchestrator | 2025-05-14 02:22:18.639620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:18.642792 | orchestrator | Wednesday 14 May 2025 02:22:18 +0000 (0:00:00.208) 0:00:09.661 ********* 2025-05-14 02:22:18.847486 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:18.847566 | orchestrator | 2025-05-14 02:22:18.847858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:18.849035 | orchestrator | Wednesday 14 May 2025 02:22:18 +0000 (0:00:00.210) 0:00:09.872 ********* 2025-05-14 02:22:19.064647 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:19.064852 | orchestrator | 2025-05-14 02:22:19.066148 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-14 02:22:19.066859 | orchestrator | Wednesday 14 May 2025 02:22:19 +0000 (0:00:00.216) 0:00:10.089 ********* 2025-05-14 02:22:19.196104 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:19.197348 | orchestrator | 2025-05-14 02:22:19.202057 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-14 02:22:19.202068 | orchestrator | Wednesday 14 May 2025 02:22:19 +0000 (0:00:00.130) 0:00:10.219 ********* 2025-05-14 02:22:19.398792 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'caf94b5f-07a0-5316-9d7c-8f668ab64c5d'}}) 2025-05-14 02:22:19.399152 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0a91196-50f5-599a-8231-3d981ca1eca9'}}) 2025-05-14 02:22:19.400226 | orchestrator | 2025-05-14 02:22:19.401084 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-14 02:22:19.403416 | orchestrator | Wednesday 14 May 2025 02:22:19 +0000 (0:00:00.203) 0:00:10.423 ********* 2025-05-14 02:22:21.553995 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'}) 2025-05-14 02:22:21.554215 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'}) 2025-05-14 02:22:21.556605 | orchestrator | 2025-05-14 02:22:21.557329 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-14 02:22:21.558101 | orchestrator | Wednesday 14 May 2025 02:22:21 +0000 (0:00:02.153) 0:00:12.577 ********* 2025-05-14 02:22:21.717487 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:21.718821 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:21.719018 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:21.721867 | orchestrator | 2025-05-14 02:22:21.721896 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-14 02:22:21.721981 | orchestrator | Wednesday 14 May 2025 02:22:21 +0000 (0:00:00.165) 0:00:12.742 ********* 2025-05-14 02:22:23.178315 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'}) 2025-05-14 02:22:23.180317 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'}) 2025-05-14 02:22:23.180449 | orchestrator | 2025-05-14 02:22:23.180464 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-14 02:22:23.180477 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:01.459) 0:00:14.201 ********* 2025-05-14 02:22:23.342338 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:23.343462 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:23.344080 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:23.345100 | orchestrator | 2025-05-14 02:22:23.347530 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-14 02:22:23.347625 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.165) 0:00:14.367 ********* 2025-05-14 02:22:23.490666 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:23.490896 | orchestrator | 2025-05-14 02:22:23.491133 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-14 02:22:23.492131 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.148) 0:00:14.515 ********* 2025-05-14 02:22:23.661827 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:23.662658 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:23.664217 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:23.664240 | orchestrator | 2025-05-14 02:22:23.664567 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-14 02:22:23.665293 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.169) 0:00:14.685 ********* 2025-05-14 02:22:23.814461 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:23.814735 | orchestrator | 2025-05-14 02:22:23.815522 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-14 02:22:23.815814 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.148) 0:00:14.833 ********* 2025-05-14 02:22:23.983245 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:23.984155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:23.984791 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 02:22:23.986115 | orchestrator | 2025-05-14 02:22:23.987101 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-14 02:22:23.987683 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.173) 0:00:15.006 ********* 2025-05-14 02:22:24.284099 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:24.284894 | orchestrator | 2025-05-14 02:22:24.285561 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-14 02:22:24.286670 | orchestrator | Wednesday 14 May 2025 02:22:24 +0000 (0:00:00.301) 0:00:15.308 ********* 2025-05-14 02:22:24.442783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:24.442878 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:24.442926 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:24.443380 | orchestrator | 2025-05-14 02:22:24.443661 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-14 02:22:24.443991 | orchestrator | Wednesday 14 May 2025 02:22:24 +0000 (0:00:00.158) 0:00:15.467 ********* 2025-05-14 02:22:24.577943 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:22:24.578840 | orchestrator | 2025-05-14 02:22:24.579025 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-14 02:22:24.579864 | orchestrator | Wednesday 14 May 2025 02:22:24 +0000 (0:00:00.135) 0:00:15.602 ********* 2025-05-14 02:22:24.754769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:24.754877 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:24.755353 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:24.756202 | orchestrator | 2025-05-14 02:22:24.756670 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-14 02:22:24.757273 | orchestrator | Wednesday 14 May 2025 02:22:24 +0000 (0:00:00.175) 0:00:15.778 ********* 2025-05-14 02:22:24.933353 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:24.934514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:24.935981 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:24.936070 | orchestrator | 2025-05-14 02:22:24.936986 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-14 02:22:24.937782 | orchestrator | Wednesday 14 May 2025 02:22:24 +0000 (0:00:00.178) 0:00:15.957 ********* 2025-05-14 02:22:25.108612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:25.108770 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:25.109000 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:25.110324 | orchestrator | 2025-05-14 02:22:25.111331 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-14 02:22:25.112289 | orchestrator | Wednesday 14 May 2025 02:22:25 +0000 (0:00:00.175) 0:00:16.132 ********* 2025-05-14 02:22:25.244565 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:25.245014 | orchestrator | 2025-05-14 02:22:25.245773 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-14 02:22:25.246535 | orchestrator | Wednesday 14 May 2025 02:22:25 +0000 (0:00:00.136) 0:00:16.268 ********* 2025-05-14 02:22:25.393876 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:25.393969 | orchestrator | 2025-05-14 02:22:25.394371 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-14 02:22:25.395355 | orchestrator | Wednesday 14 May 2025 02:22:25 +0000 (0:00:00.148) 0:00:16.417 ********* 2025-05-14 02:22:25.548325 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:25.548487 | orchestrator | 2025-05-14 02:22:25.549517 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-14 02:22:25.550260 | orchestrator | Wednesday 14 May 2025 02:22:25 +0000 (0:00:00.155) 0:00:16.572 ********* 2025-05-14 02:22:25.699973 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:22:25.701781 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-14 02:22:25.701834 | orchestrator | } 2025-05-14 02:22:25.702848 | orchestrator | 2025-05-14 02:22:25.703862 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-14 02:22:25.704664 | orchestrator | Wednesday 14 May 2025 02:22:25 +0000 (0:00:00.149) 0:00:16.722 ********* 2025-05-14 02:22:25.845370 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:22:25.845477 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-14 02:22:25.846851 | orchestrator | } 2025-05-14 02:22:25.847523 | orchestrator | 2025-05-14 02:22:25.849618 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-14 02:22:25.850239 | orchestrator | Wednesday 14 May 2025 02:22:25 +0000 (0:00:00.147) 0:00:16.870 ********* 2025-05-14 02:22:26.012305 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:22:26.012557 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-14 02:22:26.014205 | orchestrator | } 2025-05-14 02:22:26.016201 | orchestrator | 2025-05-14 02:22:26.016245 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-14 02:22:26.016259 | orchestrator | Wednesday 14 May 2025 02:22:26 +0000 (0:00:00.166) 0:00:17.036 ********* 2025-05-14 02:22:27.136086 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:22:27.136259 | orchestrator | 2025-05-14 02:22:27.137176 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-14 02:22:27.139416 | orchestrator | Wednesday 14 May 2025 02:22:27 +0000 (0:00:01.122) 0:00:18.159 ********* 2025-05-14 02:22:27.678128 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:22:27.679831 | orchestrator | 2025-05-14 02:22:27.679873 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-05-14 02:22:27.680316 | orchestrator | Wednesday 14 May 2025 02:22:27 +0000 (0:00:00.543) 0:00:18.703 ********* 2025-05-14 02:22:28.193226 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:22:28.193791 | orchestrator | 2025-05-14 02:22:28.194146 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-14 02:22:28.195214 | orchestrator | Wednesday 14 May 2025 02:22:28 +0000 (0:00:00.515) 0:00:19.218 ********* 2025-05-14 02:22:28.353210 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:22:28.357110 | orchestrator | 2025-05-14 02:22:28.357233 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-14 02:22:28.357251 | orchestrator | Wednesday 14 May 2025 02:22:28 +0000 (0:00:00.160) 0:00:19.379 ********* 2025-05-14 02:22:28.467225 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:28.467901 | orchestrator | 2025-05-14 02:22:28.469170 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-14 02:22:28.470539 | orchestrator | Wednesday 14 May 2025 02:22:28 +0000 (0:00:00.112) 0:00:19.491 ********* 2025-05-14 02:22:28.571228 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:28.571351 | orchestrator | 2025-05-14 02:22:28.571923 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-14 02:22:28.572615 | orchestrator | Wednesday 14 May 2025 02:22:28 +0000 (0:00:00.104) 0:00:19.596 ********* 2025-05-14 02:22:28.714822 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:22:28.716224 | orchestrator |  "vgs_report": { 2025-05-14 02:22:28.716846 | orchestrator |  "vg": [] 2025-05-14 02:22:28.722304 | orchestrator |  } 2025-05-14 02:22:28.722785 | orchestrator | } 2025-05-14 02:22:28.723352 | orchestrator | 2025-05-14 02:22:28.724399 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-14 02:22:28.724972 | orchestrator | Wednesday 14 May 2025 02:22:28 +0000 (0:00:00.140) 0:00:19.736 ********* 2025-05-14 02:22:28.863794 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:28.864692 | orchestrator | 2025-05-14 02:22:28.866395 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-14 02:22:28.868317 | orchestrator | Wednesday 14 May 2025 02:22:28 +0000 (0:00:00.151) 0:00:19.888 ********* 2025-05-14 02:22:29.002405 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:29.003506 | orchestrator | 2025-05-14 02:22:29.008364 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-14 02:22:29.008462 | orchestrator | Wednesday 14 May 2025 02:22:28 +0000 (0:00:00.138) 0:00:20.027 ********* 2025-05-14 02:22:29.142639 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:29.143480 | orchestrator | 2025-05-14 02:22:29.145376 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-14 02:22:29.146524 | orchestrator | Wednesday 14 May 2025 02:22:29 +0000 (0:00:00.139) 0:00:20.166 ********* 2025-05-14 02:22:29.282401 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:29.283462 | orchestrator | 2025-05-14 02:22:29.284454 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-14 02:22:29.285025 | orchestrator | Wednesday 14 May 2025 02:22:29 +0000 (0:00:00.141) 0:00:20.307 ********* 2025-05-14 
02:22:29.615991 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:29.617251 | orchestrator | 2025-05-14 02:22:29.618408 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-14 02:22:29.621219 | orchestrator | Wednesday 14 May 2025 02:22:29 +0000 (0:00:00.332) 0:00:20.640 ********* 2025-05-14 02:22:29.759378 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:29.762108 | orchestrator | 2025-05-14 02:22:29.762335 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-14 02:22:29.762887 | orchestrator | Wednesday 14 May 2025 02:22:29 +0000 (0:00:00.142) 0:00:20.782 ********* 2025-05-14 02:22:29.914526 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:29.916254 | orchestrator | 2025-05-14 02:22:29.917378 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-14 02:22:29.918108 | orchestrator | Wednesday 14 May 2025 02:22:29 +0000 (0:00:00.156) 0:00:20.939 ********* 2025-05-14 02:22:30.065103 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:30.066115 | orchestrator | 2025-05-14 02:22:30.067424 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-14 02:22:30.068285 | orchestrator | Wednesday 14 May 2025 02:22:30 +0000 (0:00:00.150) 0:00:21.089 ********* 2025-05-14 02:22:30.190323 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:30.190861 | orchestrator | 2025-05-14 02:22:30.191997 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-14 02:22:30.192498 | orchestrator | Wednesday 14 May 2025 02:22:30 +0000 (0:00:00.126) 0:00:21.216 ********* 2025-05-14 02:22:30.343461 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:30.347204 | orchestrator | 2025-05-14 02:22:30.348269 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-14 02:22:30.349423 | orchestrator | Wednesday 14 May 2025 02:22:30 +0000 (0:00:00.150) 0:00:21.366 ********* 2025-05-14 02:22:30.490622 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:30.492031 | orchestrator | 2025-05-14 02:22:30.492477 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-14 02:22:30.495329 | orchestrator | Wednesday 14 May 2025 02:22:30 +0000 (0:00:00.148) 0:00:21.515 ********* 2025-05-14 02:22:30.638329 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:30.639228 | orchestrator | 2025-05-14 02:22:30.640966 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-14 02:22:30.642242 | orchestrator | Wednesday 14 May 2025 02:22:30 +0000 (0:00:00.147) 0:00:21.662 ********* 2025-05-14 02:22:30.757628 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:30.758596 | orchestrator | 2025-05-14 02:22:30.759189 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-14 02:22:30.759830 | orchestrator | Wednesday 14 May 2025 02:22:30 +0000 (0:00:00.119) 0:00:21.782 ********* 2025-05-14 02:22:30.899457 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:30.901151 | orchestrator | 2025-05-14 02:22:30.903143 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-14 02:22:30.904431 | orchestrator | Wednesday 14 May 2025 02:22:30 +0000 (0:00:00.141) 0:00:21.924 
********* 2025-05-14 02:22:31.090856 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:31.090956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:31.091468 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:31.092025 | orchestrator | 2025-05-14 02:22:31.092530 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-14 02:22:31.092985 | orchestrator | Wednesday 14 May 2025 02:22:31 +0000 (0:00:00.192) 0:00:22.116 ********* 2025-05-14 02:22:31.264467 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:31.266393 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:31.267279 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:31.270603 | orchestrator | 2025-05-14 02:22:31.270645 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-14 02:22:31.270667 | orchestrator | Wednesday 14 May 2025 02:22:31 +0000 (0:00:00.173) 0:00:22.289 ********* 2025-05-14 02:22:31.663039 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:31.664339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:31.666073 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:31.668570 | orchestrator | 2025-05-14 02:22:31.668683 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-14 02:22:31.671310 | orchestrator | Wednesday 14 May 2025 02:22:31 +0000 (0:00:00.397) 0:00:22.687 ********* 2025-05-14 02:22:31.861143 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:31.862136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:31.863486 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:31.864835 | orchestrator | 2025-05-14 02:22:31.865952 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-14 02:22:31.866671 | orchestrator | Wednesday 14 May 2025 02:22:31 +0000 (0:00:00.197) 0:00:22.884 ********* 2025-05-14 02:22:32.036552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:32.037147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:32.038326 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:32.039107 | orchestrator | 2025-05-14 02:22:32.040165 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-14 02:22:32.040868 | orchestrator | Wednesday 14 May 2025 02:22:32 +0000 (0:00:00.177) 0:00:23.061 ********* 2025-05-14 02:22:32.219325 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:32.220648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:32.221777 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:32.223330 | orchestrator | 2025-05-14 02:22:32.224552 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-14 02:22:32.225386 | orchestrator | Wednesday 14 May 2025 02:22:32 +0000 (0:00:00.182) 0:00:23.244 ********* 2025-05-14 02:22:32.403315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:32.403418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:32.403953 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:32.405245 | orchestrator | 2025-05-14 02:22:32.406167 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-14 02:22:32.407279 | orchestrator | Wednesday 14 May 2025 02:22:32 +0000 (0:00:00.178) 0:00:23.423 ********* 2025-05-14 02:22:32.582469 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:32.583963 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:32.588363 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:32.588391 | orchestrator | 2025-05-14 02:22:32.588404 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-14 02:22:32.589363 | orchestrator | Wednesday 14 May 2025 02:22:32 +0000 (0:00:00.184) 0:00:23.607 ********* 2025-05-14 02:22:33.120753 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:22:33.120880 | orchestrator | 2025-05-14 02:22:33.120907 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-14 02:22:33.120952 | orchestrator | Wednesday 14 May 2025 02:22:33 +0000 (0:00:00.532) 0:00:24.139 ********* 2025-05-14 02:22:33.626170 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:22:33.627317 | orchestrator | 2025-05-14 02:22:33.628142 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-14 02:22:33.629104 | orchestrator | Wednesday 14 May 2025 02:22:33 +0000 (0:00:00.509) 0:00:24.649 ********* 2025-05-14 02:22:33.779391 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:22:33.779577 | orchestrator | 2025-05-14 02:22:33.780224 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-14 02:22:33.781612 | orchestrator | Wednesday 14 May 2025 02:22:33 +0000 (0:00:00.156) 0:00:24.805 ********* 2025-05-14 02:22:33.985107 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'vg_name': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'}) 2025-05-14 02:22:33.985264 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'vg_name': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'}) 2025-05-14 02:22:33.990236 | orchestrator | 2025-05-14 02:22:33.991234 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-14 02:22:33.992018 | orchestrator | Wednesday 14 May 2025 02:22:33 +0000 (0:00:00.203) 0:00:25.008 ********* 2025-05-14 02:22:34.355689 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:34.356023 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:34.358534 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:34.362803 | orchestrator | 2025-05-14 02:22:34.363838 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-14 02:22:34.365039 | orchestrator | Wednesday 14 May 2025 02:22:34 +0000 (0:00:00.373) 0:00:25.381 ********* 2025-05-14 02:22:34.528120 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:34.528748 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:34.529747 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:34.530376 | orchestrator | 2025-05-14 02:22:34.534383 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-14 02:22:34.535644 | orchestrator | Wednesday 14 May 2025 02:22:34 +0000 (0:00:00.172) 0:00:25.554 ********* 2025-05-14 02:22:34.751130 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'})  2025-05-14 02:22:34.752370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'})  2025-05-14 02:22:34.753297 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:22:34.756668 | orchestrator | 2025-05-14 02:22:34.757473 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-14 02:22:34.758279 | orchestrator | Wednesday 14 May 2025 02:22:34 +0000 (0:00:00.222) 0:00:25.776 ********* 2025-05-14 02:22:35.626846 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:22:35.627860 | orchestrator |  "lvm_report": { 2025-05-14 02:22:35.629567 | orchestrator |  "lv": [ 2025-05-14 02:22:35.633053 | orchestrator |  { 2025-05-14 02:22:35.634104 | orchestrator |  "lv_name": "osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9", 2025-05-14 02:22:35.634268 | orchestrator |  "vg_name": "ceph-a0a91196-50f5-599a-8231-3d981ca1eca9" 2025-05-14 02:22:35.635536 | orchestrator |  }, 2025-05-14 02:22:35.636283 | orchestrator |  { 2025-05-14 02:22:35.637012 | orchestrator |  "lv_name": "osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d", 2025-05-14 
02:22:35.638154 | orchestrator |  "vg_name": "ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d" 2025-05-14 02:22:35.639959 | orchestrator |  } 2025-05-14 02:22:35.640436 | orchestrator |  ], 2025-05-14 02:22:35.641249 | orchestrator |  "pv": [ 2025-05-14 02:22:35.642234 | orchestrator |  { 2025-05-14 02:22:35.642275 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-14 02:22:35.644645 | orchestrator |  "vg_name": "ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d" 2025-05-14 02:22:35.647480 | orchestrator |  }, 2025-05-14 02:22:35.648168 | orchestrator |  { 2025-05-14 02:22:35.648980 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-14 02:22:35.649779 | orchestrator |  "vg_name": "ceph-a0a91196-50f5-599a-8231-3d981ca1eca9" 2025-05-14 02:22:35.650064 | orchestrator |  } 2025-05-14 02:22:35.652865 | orchestrator |  ] 2025-05-14 02:22:35.653109 | orchestrator |  } 2025-05-14 02:22:35.654118 | orchestrator | } 2025-05-14 02:22:35.657172 | orchestrator | 2025-05-14 02:22:35.658161 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-14 02:22:35.659284 | orchestrator | 2025-05-14 02:22:35.659928 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:22:35.660717 | orchestrator | Wednesday 14 May 2025 02:22:35 +0000 (0:00:00.874) 0:00:26.651 ********* 2025-05-14 02:22:36.280178 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-14 02:22:36.280378 | orchestrator | 2025-05-14 02:22:36.287280 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:22:36.287334 | orchestrator | Wednesday 14 May 2025 02:22:36 +0000 (0:00:00.652) 0:00:27.303 ********* 2025-05-14 02:22:36.528836 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:36.529484 | orchestrator | 2025-05-14 02:22:36.530842 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:36.532029 | orchestrator | Wednesday 14 May 2025 02:22:36 +0000 (0:00:00.250) 0:00:27.554 ********* 2025-05-14 02:22:37.013610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-14 02:22:37.014613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-14 02:22:37.017881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-14 02:22:37.018447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-14 02:22:37.018853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-14 02:22:37.019782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-14 02:22:37.020236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-14 02:22:37.020661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-14 02:22:37.021108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-14 02:22:37.021559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-14 02:22:37.022121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-14 02:22:37.022619 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-14 02:22:37.023225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-14 02:22:37.023569 | orchestrator | 2025-05-14 02:22:37.024679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:37.025145 | orchestrator | Wednesday 14 May 2025 02:22:37 +0000 (0:00:00.482) 0:00:28.037 ********* 2025-05-14 02:22:37.212238 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:37.212448 | orchestrator | 2025-05-14 02:22:37.212813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:37.213329 | orchestrator | Wednesday 14 May 2025 02:22:37 +0000 (0:00:00.200) 0:00:28.238 ********* 2025-05-14 02:22:37.408982 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:37.409603 | orchestrator | 2025-05-14 02:22:37.410486 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:37.411747 | orchestrator | Wednesday 14 May 2025 02:22:37 +0000 (0:00:00.195) 0:00:28.433 ********* 2025-05-14 02:22:37.613352 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:37.613451 | orchestrator | 2025-05-14 02:22:37.613465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:37.613479 | orchestrator | Wednesday 14 May 2025 02:22:37 +0000 (0:00:00.199) 0:00:28.633 ********* 2025-05-14 02:22:37.791748 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:37.792473 | orchestrator | 2025-05-14 02:22:37.794102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:37.796816 | orchestrator | Wednesday 14 May 2025 02:22:37 +0000 (0:00:00.183) 0:00:28.816 ********* 2025-05-14 02:22:37.998557 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:37.998651 | orchestrator | 2025-05-14 02:22:37.998666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:37.998678 | orchestrator | Wednesday 14 May 2025 02:22:37 +0000 (0:00:00.203) 0:00:29.019 ********* 2025-05-14 02:22:38.234877 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:38.235381 | orchestrator | 2025-05-14 02:22:38.236470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:38.237182 | orchestrator | Wednesday 14 May 2025 02:22:38 +0000 (0:00:00.238) 0:00:29.257 ********* 2025-05-14 02:22:38.452096 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:38.452941 | orchestrator | 2025-05-14 02:22:38.454100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:38.456741 | orchestrator | Wednesday 14 May 2025 02:22:38 +0000 (0:00:00.219) 0:00:29.477 ********* 2025-05-14 02:22:38.859417 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:38.860468 | orchestrator | 2025-05-14 02:22:38.861817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:38.866101 | orchestrator | Wednesday 14 May 2025 02:22:38 +0000 (0:00:00.407) 0:00:29.884 ********* 2025-05-14 02:22:39.308168 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b) 2025-05-14 02:22:39.308328 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b) 2025-05-14 02:22:39.310612 | orchestrator | 2025-05-14 02:22:39.310653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:39.314390 | orchestrator | Wednesday 14 May 2025 02:22:39 +0000 (0:00:00.448) 0:00:30.333 ********* 2025-05-14 02:22:39.760211 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2fe9822d-742a-4109-b2fd-4f62bd011e9b) 2025-05-14 02:22:39.760393 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2fe9822d-742a-4109-b2fd-4f62bd011e9b) 2025-05-14 02:22:39.761563 | orchestrator | 2025-05-14 02:22:39.762878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:39.763247 | orchestrator | Wednesday 14 May 2025 02:22:39 +0000 (0:00:00.451) 0:00:30.784 ********* 2025-05-14 02:22:40.234522 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4bf8951c-ead1-422f-8e98-563fd238f873) 2025-05-14 02:22:40.237897 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4bf8951c-ead1-422f-8e98-563fd238f873) 2025-05-14 02:22:40.238926 | orchestrator | 2025-05-14 02:22:40.240401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:40.242945 | orchestrator | Wednesday 14 May 2025 02:22:40 +0000 (0:00:00.473) 0:00:31.257 ********* 2025-05-14 02:22:40.709101 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9158ba9c-f661-457a-83a0-7301d2e715e9) 2025-05-14 02:22:40.709219 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9158ba9c-f661-457a-83a0-7301d2e715e9) 2025-05-14 02:22:40.711334 | orchestrator | 2025-05-14 02:22:40.713456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:40.713529 | orchestrator | Wednesday 14 May 2025 02:22:40 +0000 (0:00:00.475) 0:00:31.733 ********* 2025-05-14 02:22:41.058580 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:22:41.058681 | orchestrator | 2025-05-14 02:22:41.058824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:41.059341 | orchestrator | Wednesday 14 May 2025 02:22:41 +0000 (0:00:00.347) 0:00:32.081 ********* 2025-05-14 02:22:41.534099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-14 02:22:41.534201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-14 02:22:41.535122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-14 02:22:41.536287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-14 02:22:41.537108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-14 02:22:41.537434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-14 02:22:41.539511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-14 02:22:41.539585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-14 02:22:41.539970 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-14 02:22:41.540041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-14 02:22:41.540355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-14 02:22:41.541191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-14 02:22:41.541531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-14 02:22:41.541801 | orchestrator | 2025-05-14 02:22:41.542340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:41.542822 | orchestrator | Wednesday 14 May 2025 02:22:41 +0000 (0:00:00.478) 0:00:32.559 ********* 2025-05-14 02:22:41.791547 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:41.791756 | orchestrator | 2025-05-14 02:22:41.791831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:41.792435 | orchestrator | Wednesday 14 May 2025 02:22:41 +0000 (0:00:00.255) 0:00:32.814 ********* 2025-05-14 02:22:42.002959 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:42.003496 | orchestrator | 2025-05-14 02:22:42.004657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:42.005289 | orchestrator | Wednesday 14 May 2025 02:22:41 +0000 (0:00:00.211) 0:00:33.026 ********* 2025-05-14 02:22:42.555993 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:42.556329 | orchestrator | 2025-05-14 02:22:42.558494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:42.560758 | orchestrator | Wednesday 14 May 2025 02:22:42 +0000 (0:00:00.554) 0:00:33.581 ********* 2025-05-14 02:22:42.779953 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:42.780365 | orchestrator | 2025-05-14 02:22:42.784178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:42.784224 | orchestrator | Wednesday 14 May 2025 02:22:42 +0000 (0:00:00.223) 0:00:33.804 ********* 2025-05-14 02:22:42.982589 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:42.983357 | orchestrator | 2025-05-14 02:22:42.984775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:42.985059 | orchestrator | Wednesday 14 May 2025 02:22:42 +0000 (0:00:00.200) 0:00:34.005 ********* 2025-05-14 02:22:43.184920 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:43.185081 | orchestrator | 2025-05-14 02:22:43.186276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:43.187152 | orchestrator | Wednesday 14 May 2025 02:22:43 +0000 (0:00:00.204) 0:00:34.209 ********* 2025-05-14 02:22:43.399188 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:43.399351 | orchestrator | 2025-05-14 02:22:43.399569 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:43.400118 | orchestrator | Wednesday 14 May 2025 02:22:43 +0000 (0:00:00.215) 0:00:34.425 ********* 2025-05-14 02:22:43.597268 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:43.598268 | orchestrator | 2025-05-14 02:22:43.599305 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-14 02:22:43.600535 | orchestrator | Wednesday 14 May 2025 02:22:43 +0000 (0:00:00.195) 0:00:34.620 ********* 2025-05-14 02:22:44.315406 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-14 02:22:44.315602 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-14 02:22:44.315753 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-14 02:22:44.316511 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-14 02:22:44.318106 | orchestrator | 2025-05-14 02:22:44.321361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:44.321642 | orchestrator | Wednesday 14 May 2025 02:22:44 +0000 (0:00:00.718) 0:00:35.338 ********* 2025-05-14 02:22:44.531090 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:44.533578 | orchestrator | 2025-05-14 02:22:44.534075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:44.535472 | orchestrator | Wednesday 14 May 2025 02:22:44 +0000 (0:00:00.214) 0:00:35.553 ********* 2025-05-14 02:22:44.741555 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:44.742100 | orchestrator | 2025-05-14 02:22:44.743938 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:44.744012 | orchestrator | Wednesday 14 May 2025 02:22:44 +0000 (0:00:00.210) 0:00:35.764 ********* 2025-05-14 02:22:44.936475 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:44.936881 | orchestrator | 2025-05-14 02:22:44.939478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:44.940153 | orchestrator | Wednesday 14 May 2025 02:22:44 +0000 (0:00:00.197) 0:00:35.961 ********* 2025-05-14 02:22:45.564006 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:45.564111 | orchestrator | 2025-05-14 02:22:45.564219 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-14 02:22:45.565815 | orchestrator | Wednesday 14 May 2025 02:22:45 +0000 (0:00:00.625) 0:00:36.587 ********* 2025-05-14 02:22:45.723384 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:45.723480 | orchestrator | 2025-05-14 02:22:45.723963 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-14 02:22:45.724608 | orchestrator | Wednesday 14 May 2025 02:22:45 +0000 (0:00:00.160) 0:00:36.748 ********* 2025-05-14 02:22:45.934226 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3c2360-3d2e-5360-8839-85b817b77bc3'}}) 2025-05-14 02:22:45.934401 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fecac30f-087c-5b0b-83ef-f9d2b642a995'}}) 2025-05-14 02:22:45.935396 | orchestrator | 2025-05-14 02:22:45.936321 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-14 02:22:45.937081 | orchestrator | Wednesday 14 May 2025 02:22:45 +0000 (0:00:00.209) 0:00:36.958 ********* 2025-05-14 02:22:47.826278 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'}) 2025-05-14 02:22:47.826831 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 
'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'}) 2025-05-14 02:22:47.828477 | orchestrator | 2025-05-14 02:22:47.829379 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-14 02:22:47.829927 | orchestrator | Wednesday 14 May 2025 02:22:47 +0000 (0:00:01.891) 0:00:38.850 ********* 2025-05-14 02:22:47.993527 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:47.994479 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:47.994989 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:47.997201 | orchestrator | 2025-05-14 02:22:47.997522 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-14 02:22:47.997795 | orchestrator | Wednesday 14 May 2025 02:22:47 +0000 (0:00:00.168) 0:00:39.018 ********* 2025-05-14 02:22:49.255127 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'}) 2025-05-14 02:22:49.255311 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'}) 2025-05-14 02:22:49.255862 | orchestrator | 2025-05-14 02:22:49.256462 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-14 02:22:49.257245 | orchestrator | Wednesday 14 May 2025 02:22:49 +0000 (0:00:01.260) 0:00:40.278 ********* 2025-05-14 02:22:49.425306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:49.425855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:49.429131 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:49.429175 | orchestrator | 2025-05-14 02:22:49.429189 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-14 02:22:49.429417 | orchestrator | Wednesday 14 May 2025 02:22:49 +0000 (0:00:00.170) 0:00:40.449 ********* 2025-05-14 02:22:49.574452 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:49.575425 | orchestrator | 2025-05-14 02:22:49.576046 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-14 02:22:49.579156 | orchestrator | Wednesday 14 May 2025 02:22:49 +0000 (0:00:00.147) 0:00:40.597 ********* 2025-05-14 02:22:49.724465 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:49.725136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:49.725733 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:49.727250 | orchestrator | 2025-05-14 02:22:49.727795 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-14 02:22:49.727828 | orchestrator | Wednesday 
14 May 2025 02:22:49 +0000 (0:00:00.151) 0:00:40.748 ********* 2025-05-14 02:22:50.042771 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:50.043464 | orchestrator | 2025-05-14 02:22:50.045912 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-14 02:22:50.046674 | orchestrator | Wednesday 14 May 2025 02:22:50 +0000 (0:00:00.316) 0:00:41.065 ********* 2025-05-14 02:22:50.199990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:50.200081 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:50.200915 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:50.201137 | orchestrator | 2025-05-14 02:22:50.202007 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-14 02:22:50.202256 | orchestrator | Wednesday 14 May 2025 02:22:50 +0000 (0:00:00.159) 0:00:41.225 ********* 2025-05-14 02:22:50.337170 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:50.337821 | orchestrator | 2025-05-14 02:22:50.338350 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-14 02:22:50.339076 | orchestrator | Wednesday 14 May 2025 02:22:50 +0000 (0:00:00.136) 0:00:41.362 ********* 2025-05-14 02:22:50.503614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:50.504181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:50.505619 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:50.506920 | orchestrator | 2025-05-14 02:22:50.507135 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-14 02:22:50.507988 | orchestrator | Wednesday 14 May 2025 02:22:50 +0000 (0:00:00.166) 0:00:41.528 ********* 2025-05-14 02:22:50.648037 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:50.648976 | orchestrator | 2025-05-14 02:22:50.649925 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-14 02:22:50.652843 | orchestrator | Wednesday 14 May 2025 02:22:50 +0000 (0:00:00.142) 0:00:41.671 ********* 2025-05-14 02:22:50.813852 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:50.814079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:50.815233 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:50.816760 | orchestrator | 2025-05-14 02:22:50.818216 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-14 02:22:50.818888 | orchestrator | Wednesday 14 May 2025 02:22:50 +0000 (0:00:00.166) 0:00:41.838 ********* 2025-05-14 02:22:50.987751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 
'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:50.988487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:50.988936 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:50.991377 | orchestrator | 2025-05-14 02:22:50.991403 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-14 02:22:50.991416 | orchestrator | Wednesday 14 May 2025 02:22:50 +0000 (0:00:00.173) 0:00:42.011 ********* 2025-05-14 02:22:51.151997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:51.153296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:51.154302 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:51.156805 | orchestrator | 2025-05-14 02:22:51.156831 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-14 02:22:51.156844 | orchestrator | Wednesday 14 May 2025 02:22:51 +0000 (0:00:00.165) 0:00:42.177 ********* 2025-05-14 02:22:51.285798 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:51.286742 | orchestrator | 2025-05-14 02:22:51.286991 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-14 02:22:51.288230 | orchestrator | Wednesday 14 May 2025 02:22:51 +0000 (0:00:00.133) 0:00:42.311 ********* 2025-05-14 02:22:51.438765 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:51.439327 | orchestrator | 2025-05-14 02:22:51.440419 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-14 02:22:51.442169 | orchestrator | Wednesday 14 May 2025 02:22:51 +0000 (0:00:00.152) 0:00:42.463 ********* 2025-05-14 02:22:51.594159 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:51.595237 | orchestrator | 2025-05-14 02:22:51.595904 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-14 02:22:51.596891 | orchestrator | Wednesday 14 May 2025 02:22:51 +0000 (0:00:00.155) 0:00:42.618 ********* 2025-05-14 02:22:51.748692 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:22:51.748922 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-14 02:22:51.749469 | orchestrator | } 2025-05-14 02:22:51.750187 | orchestrator | 2025-05-14 02:22:51.750755 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-14 02:22:51.751311 | orchestrator | Wednesday 14 May 2025 02:22:51 +0000 (0:00:00.154) 0:00:42.773 ********* 2025-05-14 02:22:52.091861 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:22:52.092069 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-14 02:22:52.092873 | orchestrator | } 2025-05-14 02:22:52.093661 | orchestrator | 2025-05-14 02:22:52.094098 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-14 02:22:52.094757 | orchestrator | Wednesday 14 May 2025 02:22:52 +0000 (0:00:00.343) 0:00:43.116 ********* 2025-05-14 02:22:52.238551 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:22:52.239208 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-14 
02:22:52.239856 | orchestrator | } 2025-05-14 02:22:52.240354 | orchestrator | 2025-05-14 02:22:52.240728 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-14 02:22:52.241441 | orchestrator | Wednesday 14 May 2025 02:22:52 +0000 (0:00:00.148) 0:00:43.264 ********* 2025-05-14 02:22:52.770121 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:52.770418 | orchestrator | 2025-05-14 02:22:52.771869 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-14 02:22:52.772522 | orchestrator | Wednesday 14 May 2025 02:22:52 +0000 (0:00:00.528) 0:00:43.793 ********* 2025-05-14 02:22:53.267142 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:53.267607 | orchestrator | 2025-05-14 02:22:53.270116 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-14 02:22:53.270460 | orchestrator | Wednesday 14 May 2025 02:22:53 +0000 (0:00:00.497) 0:00:44.290 ********* 2025-05-14 02:22:53.818557 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:53.821171 | orchestrator | 2025-05-14 02:22:53.821207 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-14 02:22:53.821247 | orchestrator | Wednesday 14 May 2025 02:22:53 +0000 (0:00:00.550) 0:00:44.840 ********* 2025-05-14 02:22:53.978792 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:53.979260 | orchestrator | 2025-05-14 02:22:53.981287 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-14 02:22:53.981319 | orchestrator | Wednesday 14 May 2025 02:22:53 +0000 (0:00:00.162) 0:00:45.003 ********* 2025-05-14 02:22:54.103175 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:54.104760 | orchestrator | 2025-05-14 02:22:54.106598 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-14 02:22:54.106627 | orchestrator | Wednesday 14 May 2025 02:22:54 +0000 (0:00:00.123) 0:00:45.127 ********* 2025-05-14 02:22:54.206326 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:54.207151 | orchestrator | 2025-05-14 02:22:54.207533 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-14 02:22:54.208197 | orchestrator | Wednesday 14 May 2025 02:22:54 +0000 (0:00:00.104) 0:00:45.232 ********* 2025-05-14 02:22:54.355668 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:22:54.357135 | orchestrator |  "vgs_report": { 2025-05-14 02:22:54.358556 | orchestrator |  "vg": [] 2025-05-14 02:22:54.359492 | orchestrator |  } 2025-05-14 02:22:54.360348 | orchestrator | } 2025-05-14 02:22:54.361021 | orchestrator | 2025-05-14 02:22:54.361656 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-14 02:22:54.362454 | orchestrator | Wednesday 14 May 2025 02:22:54 +0000 (0:00:00.148) 0:00:45.381 ********* 2025-05-14 02:22:54.505119 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:54.505233 | orchestrator | 2025-05-14 02:22:54.506268 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-14 02:22:54.506663 | orchestrator | Wednesday 14 May 2025 02:22:54 +0000 (0:00:00.148) 0:00:45.529 ********* 2025-05-14 02:22:54.848834 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:54.849109 | orchestrator | 2025-05-14 02:22:54.849666 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-05-14 02:22:54.850621 | orchestrator | Wednesday 14 May 2025 02:22:54 +0000 (0:00:00.343) 0:00:45.872 ********* 2025-05-14 02:22:54.980969 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:54.981915 | orchestrator | 2025-05-14 02:22:54.982918 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-14 02:22:54.985564 | orchestrator | Wednesday 14 May 2025 02:22:54 +0000 (0:00:00.132) 0:00:46.005 ********* 2025-05-14 02:22:55.123462 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:55.123606 | orchestrator | 2025-05-14 02:22:55.124340 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-14 02:22:55.125097 | orchestrator | Wednesday 14 May 2025 02:22:55 +0000 (0:00:00.141) 0:00:46.147 ********* 2025-05-14 02:22:55.270604 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:55.270696 | orchestrator | 2025-05-14 02:22:55.273737 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-14 02:22:55.274869 | orchestrator | Wednesday 14 May 2025 02:22:55 +0000 (0:00:00.147) 0:00:46.294 ********* 2025-05-14 02:22:55.411258 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:55.411490 | orchestrator | 2025-05-14 02:22:55.412293 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-14 02:22:55.413497 | orchestrator | Wednesday 14 May 2025 02:22:55 +0000 (0:00:00.141) 0:00:46.436 ********* 2025-05-14 02:22:55.555564 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:55.556161 | orchestrator | 2025-05-14 02:22:55.557225 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-14 02:22:55.558517 | orchestrator | Wednesday 14 May 2025 02:22:55 +0000 (0:00:00.144) 0:00:46.580 ********* 2025-05-14 02:22:55.711125 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:55.711953 | orchestrator | 2025-05-14 02:22:55.712547 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-14 02:22:55.712989 | orchestrator | Wednesday 14 May 2025 02:22:55 +0000 (0:00:00.155) 0:00:46.736 ********* 2025-05-14 02:22:55.861648 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:55.862178 | orchestrator | 2025-05-14 02:22:55.863148 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-14 02:22:55.863642 | orchestrator | Wednesday 14 May 2025 02:22:55 +0000 (0:00:00.149) 0:00:46.885 ********* 2025-05-14 02:22:56.012680 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:56.015892 | orchestrator | 2025-05-14 02:22:56.016200 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-14 02:22:56.016651 | orchestrator | Wednesday 14 May 2025 02:22:56 +0000 (0:00:00.149) 0:00:47.034 ********* 2025-05-14 02:22:56.150079 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:56.152925 | orchestrator | 2025-05-14 02:22:56.152959 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-14 02:22:56.152994 | orchestrator | Wednesday 14 May 2025 02:22:56 +0000 (0:00:00.137) 0:00:47.172 ********* 2025-05-14 02:22:56.312353 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:56.313293 | orchestrator | 2025-05-14 02:22:56.314400 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-05-14 02:22:56.315860 | orchestrator | Wednesday 14 May 2025 02:22:56 +0000 (0:00:00.165) 0:00:47.337 ********* 2025-05-14 02:22:56.464612 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:56.464820 | orchestrator | 2025-05-14 02:22:56.465769 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-14 02:22:56.466849 | orchestrator | Wednesday 14 May 2025 02:22:56 +0000 (0:00:00.152) 0:00:47.489 ********* 2025-05-14 02:22:56.812035 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:56.812404 | orchestrator | 2025-05-14 02:22:56.812749 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-14 02:22:56.813722 | orchestrator | Wednesday 14 May 2025 02:22:56 +0000 (0:00:00.346) 0:00:47.836 ********* 2025-05-14 02:22:56.991764 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:56.991839 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:56.991927 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:56.991937 | orchestrator | 2025-05-14 02:22:56.992102 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-14 02:22:56.992194 | orchestrator | Wednesday 14 May 2025 02:22:56 +0000 (0:00:00.175) 0:00:48.012 ********* 2025-05-14 02:22:57.183558 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:57.185106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:57.185130 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:57.185918 | orchestrator | 2025-05-14 02:22:57.186321 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-14 02:22:57.186878 | orchestrator | Wednesday 14 May 2025 02:22:57 +0000 (0:00:00.191) 0:00:48.204 ********* 2025-05-14 02:22:57.374788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:57.374952 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:57.376886 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:57.378164 | orchestrator | 2025-05-14 02:22:57.379794 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-14 02:22:57.380806 | orchestrator | Wednesday 14 May 2025 02:22:57 +0000 (0:00:00.194) 0:00:48.399 ********* 2025-05-14 02:22:57.529604 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:57.529886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 
02:22:57.530610 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:57.531426 | orchestrator | 2025-05-14 02:22:57.532244 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-14 02:22:57.532965 | orchestrator | Wednesday 14 May 2025 02:22:57 +0000 (0:00:00.155) 0:00:48.554 ********* 2025-05-14 02:22:57.712242 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:57.712507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:57.714666 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:57.714728 | orchestrator | 2025-05-14 02:22:57.715152 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-14 02:22:57.715948 | orchestrator | Wednesday 14 May 2025 02:22:57 +0000 (0:00:00.183) 0:00:48.737 ********* 2025-05-14 02:22:57.850131 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:57.850396 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:57.852009 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:57.852370 | orchestrator | 2025-05-14 02:22:57.853111 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-14 02:22:57.853839 | orchestrator | Wednesday 14 May 2025 02:22:57 +0000 (0:00:00.137) 0:00:48.875 ********* 2025-05-14 02:22:58.010094 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:58.010525 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:58.011593 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:58.012222 | orchestrator | 2025-05-14 02:22:58.013062 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-14 02:22:58.014075 | orchestrator | Wednesday 14 May 2025 02:22:58 +0000 (0:00:00.160) 0:00:49.035 ********* 2025-05-14 02:22:58.167465 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:58.168133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:58.169098 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:58.169637 | orchestrator | 2025-05-14 02:22:58.170081 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-14 02:22:58.170515 | orchestrator | Wednesday 14 May 2025 02:22:58 +0000 (0:00:00.156) 0:00:49.191 ********* 2025-05-14 02:22:58.660797 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:58.661142 | orchestrator | 2025-05-14 02:22:58.663126 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-05-14 02:22:58.664658 | orchestrator | Wednesday 14 May 2025 02:22:58 +0000 (0:00:00.494) 0:00:49.686 ********* 2025-05-14 02:22:59.152660 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:59.153137 | orchestrator | 2025-05-14 02:22:59.153771 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-14 02:22:59.154781 | orchestrator | Wednesday 14 May 2025 02:22:59 +0000 (0:00:00.491) 0:00:50.177 ********* 2025-05-14 02:22:59.423130 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:59.425746 | orchestrator | 2025-05-14 02:22:59.425796 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-14 02:22:59.426066 | orchestrator | Wednesday 14 May 2025 02:22:59 +0000 (0:00:00.271) 0:00:50.448 ********* 2025-05-14 02:22:59.601749 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'vg_name': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'}) 2025-05-14 02:22:59.602001 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'vg_name': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'}) 2025-05-14 02:22:59.602881 | orchestrator | 2025-05-14 02:22:59.603380 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-14 02:22:59.603782 | orchestrator | Wednesday 14 May 2025 02:22:59 +0000 (0:00:00.178) 0:00:50.627 ********* 2025-05-14 02:22:59.752777 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:59.753065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:59.755500 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:59.755534 | orchestrator | 2025-05-14 02:22:59.755547 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-14 02:22:59.755560 | orchestrator | Wednesday 14 May 2025 02:22:59 +0000 (0:00:00.150) 0:00:50.778 ********* 2025-05-14 02:22:59.905859 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:22:59.906144 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:22:59.906615 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:59.907991 | orchestrator | 2025-05-14 02:22:59.908070 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-14 02:22:59.908901 | orchestrator | Wednesday 14 May 2025 02:22:59 +0000 (0:00:00.152) 0:00:50.931 ********* 2025-05-14 02:23:00.066459 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'})  2025-05-14 02:23:00.067103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'})  2025-05-14 02:23:00.067631 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:00.068612 | orchestrator | 2025-05-14 
02:23:00.068805 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-14 02:23:00.069371 | orchestrator | Wednesday 14 May 2025 02:23:00 +0000 (0:00:00.160) 0:00:51.092 ********* 2025-05-14 02:23:00.811610 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:23:00.814327 | orchestrator |  "lvm_report": { 2025-05-14 02:23:00.814729 | orchestrator |  "lv": [ 2025-05-14 02:23:00.815136 | orchestrator |  { 2025-05-14 02:23:00.815381 | orchestrator |  "lv_name": "osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3", 2025-05-14 02:23:00.815783 | orchestrator |  "vg_name": "ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3" 2025-05-14 02:23:00.815959 | orchestrator |  }, 2025-05-14 02:23:00.816389 | orchestrator |  { 2025-05-14 02:23:00.816916 | orchestrator |  "lv_name": "osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995", 2025-05-14 02:23:00.817155 | orchestrator |  "vg_name": "ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995" 2025-05-14 02:23:00.817450 | orchestrator |  } 2025-05-14 02:23:00.818139 | orchestrator |  ], 2025-05-14 02:23:00.819312 | orchestrator |  "pv": [ 2025-05-14 02:23:00.819463 | orchestrator |  { 2025-05-14 02:23:00.819781 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-14 02:23:00.820456 | orchestrator |  "vg_name": "ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3" 2025-05-14 02:23:00.820762 | orchestrator |  }, 2025-05-14 02:23:00.821101 | orchestrator |  { 2025-05-14 02:23:00.821694 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-14 02:23:00.822221 | orchestrator |  "vg_name": "ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995" 2025-05-14 02:23:00.822528 | orchestrator |  } 2025-05-14 02:23:00.822548 | orchestrator |  ] 2025-05-14 02:23:00.822804 | orchestrator |  } 2025-05-14 02:23:00.823419 | orchestrator | } 2025-05-14 02:23:00.823756 | orchestrator | 2025-05-14 02:23:00.824684 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-14 02:23:00.824798 | orchestrator | 2025-05-14 02:23:00.825038 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:23:00.826277 | orchestrator | Wednesday 14 May 2025 02:23:00 +0000 (0:00:00.744) 0:00:51.836 ********* 2025-05-14 02:23:01.031624 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-14 02:23:01.033519 | orchestrator | 2025-05-14 02:23:01.035188 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:23:01.035921 | orchestrator | Wednesday 14 May 2025 02:23:01 +0000 (0:00:00.220) 0:00:52.057 ********* 2025-05-14 02:23:01.232575 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:01.232676 | orchestrator | 2025-05-14 02:23:01.232951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:01.233250 | orchestrator | Wednesday 14 May 2025 02:23:01 +0000 (0:00:00.198) 0:00:52.256 ********* 2025-05-14 02:23:01.650576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-14 02:23:01.650882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-14 02:23:01.652075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-14 02:23:01.652167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-14 02:23:01.653030 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-14 02:23:01.653844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-14 02:23:01.654485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-14 02:23:01.655013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-14 02:23:01.655451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-14 02:23:01.656613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-14 02:23:01.656669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-14 02:23:01.656976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-14 02:23:01.657640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-14 02:23:01.657715 | orchestrator | 2025-05-14 02:23:01.658170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:01.658594 | orchestrator | Wednesday 14 May 2025 02:23:01 +0000 (0:00:00.419) 0:00:52.675 ********* 2025-05-14 02:23:01.841104 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:01.841293 | orchestrator | 2025-05-14 02:23:01.841770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:01.842208 | orchestrator | Wednesday 14 May 2025 02:23:01 +0000 (0:00:00.190) 0:00:52.866 ********* 2025-05-14 02:23:02.032217 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:02.032933 | orchestrator | 2025-05-14 02:23:02.033683 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:02.034392 | orchestrator | Wednesday 14 May 2025 02:23:02 +0000 (0:00:00.190) 0:00:53.057 ********* 2025-05-14 02:23:02.231477 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:02.231787 | orchestrator | 2025-05-14 02:23:02.232112 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:02.232640 | orchestrator | Wednesday 14 May 2025 02:23:02 +0000 (0:00:00.198) 0:00:53.255 ********* 2025-05-14 02:23:02.420933 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:02.421558 | orchestrator | 2025-05-14 02:23:02.422159 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:02.422751 | orchestrator | Wednesday 14 May 2025 02:23:02 +0000 (0:00:00.190) 0:00:53.446 ********* 2025-05-14 02:23:02.616368 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:02.616460 | orchestrator | 2025-05-14 02:23:02.617515 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:02.618007 | orchestrator | Wednesday 14 May 2025 02:23:02 +0000 (0:00:00.195) 0:00:53.641 ********* 2025-05-14 02:23:03.125872 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:03.125972 | orchestrator | 2025-05-14 02:23:03.126186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:03.126643 | orchestrator | Wednesday 14 May 2025 02:23:03 +0000 (0:00:00.509) 0:00:54.150 ********* 2025-05-14 02:23:03.373773 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:23:03.373865 | orchestrator | 2025-05-14 02:23:03.374375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:03.375073 | orchestrator | Wednesday 14 May 2025 02:23:03 +0000 (0:00:00.245) 0:00:54.396 ********* 2025-05-14 02:23:03.572841 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:03.573491 | orchestrator | 2025-05-14 02:23:03.573891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:03.575205 | orchestrator | Wednesday 14 May 2025 02:23:03 +0000 (0:00:00.202) 0:00:54.598 ********* 2025-05-14 02:23:03.975035 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53) 2025-05-14 02:23:03.975625 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53) 2025-05-14 02:23:03.976546 | orchestrator | 2025-05-14 02:23:03.977512 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:03.978774 | orchestrator | Wednesday 14 May 2025 02:23:03 +0000 (0:00:00.400) 0:00:54.999 ********* 2025-05-14 02:23:04.395630 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7d716f79-cf1d-4cd5-9251-d30dd616fe8c) 2025-05-14 02:23:04.397383 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7d716f79-cf1d-4cd5-9251-d30dd616fe8c) 2025-05-14 02:23:04.397415 | orchestrator | 2025-05-14 02:23:04.397429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:04.397633 | orchestrator | Wednesday 14 May 2025 02:23:04 +0000 (0:00:00.419) 0:00:55.419 ********* 2025-05-14 02:23:04.844902 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_276d5307-5ea7-4279-8794-03223ea8507b) 2025-05-14 02:23:04.845632 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_276d5307-5ea7-4279-8794-03223ea8507b) 2025-05-14 02:23:04.846659 | orchestrator | 2025-05-14 02:23:04.847178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:04.849889 | orchestrator | Wednesday 14 May 2025 02:23:04 +0000 (0:00:00.450) 0:00:55.870 ********* 2025-05-14 02:23:05.290874 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_07a08b1a-3bd9-437e-a737-9a0e3fc440bf) 2025-05-14 02:23:05.292577 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_07a08b1a-3bd9-437e-a737-9a0e3fc440bf) 2025-05-14 02:23:05.293390 | orchestrator | 2025-05-14 02:23:05.294582 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:05.295735 | orchestrator | Wednesday 14 May 2025 02:23:05 +0000 (0:00:00.445) 0:00:56.315 ********* 2025-05-14 02:23:05.619937 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:23:05.620622 | orchestrator | 2025-05-14 02:23:05.621765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:05.622396 | orchestrator | Wednesday 14 May 2025 02:23:05 +0000 (0:00:00.329) 0:00:56.645 ********* 2025-05-14 02:23:06.060359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-14 02:23:06.060517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
2025-05-14 02:23:06.060959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-14 02:23:06.061448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-14 02:23:06.062346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-14 02:23:06.062564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-14 02:23:06.063309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-14 02:23:06.063945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-14 02:23:06.065897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-14 02:23:06.066107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-14 02:23:06.067394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-14 02:23:06.067927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-14 02:23:06.068370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-14 02:23:06.069061 | orchestrator | 2025-05-14 02:23:06.069440 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:06.069848 | orchestrator | Wednesday 14 May 2025 02:23:06 +0000 (0:00:00.441) 0:00:57.086 ********* 2025-05-14 02:23:06.517173 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:06.517972 | orchestrator | 2025-05-14 02:23:06.518231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:06.518893 | orchestrator | Wednesday 14 May 2025 02:23:06 +0000 (0:00:00.454) 0:00:57.541 ********* 2025-05-14 02:23:06.706430 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:06.706918 | orchestrator | 2025-05-14 02:23:06.708621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:06.708647 | orchestrator | Wednesday 14 May 2025 02:23:06 +0000 (0:00:00.190) 0:00:57.731 ********* 2025-05-14 02:23:06.894133 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:06.894912 | orchestrator | 2025-05-14 02:23:06.895881 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:06.896427 | orchestrator | Wednesday 14 May 2025 02:23:06 +0000 (0:00:00.188) 0:00:57.920 ********* 2025-05-14 02:23:07.089195 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:07.089361 | orchestrator | 2025-05-14 02:23:07.089448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:07.089877 | orchestrator | Wednesday 14 May 2025 02:23:07 +0000 (0:00:00.194) 0:00:58.115 ********* 2025-05-14 02:23:07.264584 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:07.265148 | orchestrator | 2025-05-14 02:23:07.265928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:07.269177 | orchestrator | Wednesday 14 May 2025 02:23:07 +0000 (0:00:00.174) 0:00:58.290 ********* 2025-05-14 02:23:07.450359 | orchestrator | 
skipping: [testbed-node-5] 2025-05-14 02:23:07.450453 | orchestrator | 2025-05-14 02:23:07.450850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:07.451861 | orchestrator | Wednesday 14 May 2025 02:23:07 +0000 (0:00:00.186) 0:00:58.476 ********* 2025-05-14 02:23:07.632812 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:07.632997 | orchestrator | 2025-05-14 02:23:07.633638 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:07.634378 | orchestrator | Wednesday 14 May 2025 02:23:07 +0000 (0:00:00.182) 0:00:58.658 ********* 2025-05-14 02:23:07.819394 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:07.819486 | orchestrator | 2025-05-14 02:23:07.819499 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:07.819841 | orchestrator | Wednesday 14 May 2025 02:23:07 +0000 (0:00:00.186) 0:00:58.845 ********* 2025-05-14 02:23:08.589364 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-14 02:23:08.590288 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-14 02:23:08.590874 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-14 02:23:08.591794 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-14 02:23:08.592261 | orchestrator | 2025-05-14 02:23:08.593109 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:08.593315 | orchestrator | Wednesday 14 May 2025 02:23:08 +0000 (0:00:00.768) 0:00:59.613 ********* 2025-05-14 02:23:08.777082 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:08.777375 | orchestrator | 2025-05-14 02:23:08.777825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:08.779465 | orchestrator | Wednesday 14 May 2025 02:23:08 +0000 (0:00:00.188) 0:00:59.801 ********* 2025-05-14 02:23:09.417287 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:09.417699 | orchestrator | 2025-05-14 02:23:09.418346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:09.419226 | orchestrator | Wednesday 14 May 2025 02:23:09 +0000 (0:00:00.639) 0:01:00.441 ********* 2025-05-14 02:23:09.625917 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:09.626448 | orchestrator | 2025-05-14 02:23:09.627130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:09.627788 | orchestrator | Wednesday 14 May 2025 02:23:09 +0000 (0:00:00.210) 0:01:00.651 ********* 2025-05-14 02:23:09.821768 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:09.822373 | orchestrator | 2025-05-14 02:23:09.825422 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-14 02:23:09.826106 | orchestrator | Wednesday 14 May 2025 02:23:09 +0000 (0:00:00.193) 0:01:00.845 ********* 2025-05-14 02:23:09.973410 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:09.974337 | orchestrator | 2025-05-14 02:23:09.975064 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-14 02:23:09.976289 | orchestrator | Wednesday 14 May 2025 02:23:09 +0000 (0:00:00.153) 0:01:00.998 ********* 2025-05-14 02:23:10.175076 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'03d77871-dede-5752-b4dd-afb6f86d8bca'}}) 2025-05-14 02:23:10.175383 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c7e27ae-f126-51b5-99e7-7e9908cad598'}}) 2025-05-14 02:23:10.176171 | orchestrator | 2025-05-14 02:23:10.176778 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-14 02:23:10.177309 | orchestrator | Wednesday 14 May 2025 02:23:10 +0000 (0:00:00.202) 0:01:01.200 ********* 2025-05-14 02:23:11.905639 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'}) 2025-05-14 02:23:11.906880 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'}) 2025-05-14 02:23:11.907357 | orchestrator | 2025-05-14 02:23:11.909436 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-14 02:23:11.909615 | orchestrator | Wednesday 14 May 2025 02:23:11 +0000 (0:00:01.726) 0:01:02.927 ********* 2025-05-14 02:23:12.063856 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:12.064375 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:12.065266 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:12.066116 | orchestrator | 2025-05-14 02:23:12.066919 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-14 02:23:12.067392 | orchestrator | Wednesday 14 May 2025 02:23:12 +0000 (0:00:00.159) 0:01:03.086 ********* 2025-05-14 02:23:13.384886 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'}) 2025-05-14 02:23:13.385055 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'}) 2025-05-14 02:23:13.385975 | orchestrator | 2025-05-14 02:23:13.388328 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-14 02:23:13.388933 | orchestrator | Wednesday 14 May 2025 02:23:13 +0000 (0:00:01.320) 0:01:04.407 ********* 2025-05-14 02:23:13.558612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:13.559989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:13.561038 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:13.561925 | orchestrator | 2025-05-14 02:23:13.562890 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-14 02:23:13.563765 | orchestrator | Wednesday 14 May 2025 02:23:13 +0000 (0:00:00.175) 0:01:04.583 ********* 2025-05-14 02:23:13.885666 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:13.886117 | orchestrator | 2025-05-14 02:23:13.887008 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-05-14 02:23:13.887765 | orchestrator | Wednesday 14 May 2025 02:23:13 +0000 (0:00:00.326) 0:01:04.910 ********* 2025-05-14 02:23:14.089656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:14.092314 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:14.096006 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:14.096040 | orchestrator | 2025-05-14 02:23:14.097464 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-14 02:23:14.098395 | orchestrator | Wednesday 14 May 2025 02:23:14 +0000 (0:00:00.199) 0:01:05.109 ********* 2025-05-14 02:23:14.238679 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:14.240070 | orchestrator | 2025-05-14 02:23:14.240357 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-14 02:23:14.241057 | orchestrator | Wednesday 14 May 2025 02:23:14 +0000 (0:00:00.151) 0:01:05.261 ********* 2025-05-14 02:23:14.395272 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:14.396124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:14.400589 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:14.400923 | orchestrator | 2025-05-14 02:23:14.401798 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-14 02:23:14.402317 | orchestrator | Wednesday 14 May 2025 02:23:14 +0000 (0:00:00.156) 0:01:05.417 ********* 2025-05-14 02:23:14.551342 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:14.551964 | orchestrator | 2025-05-14 02:23:14.553346 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-14 02:23:14.554138 | orchestrator | Wednesday 14 May 2025 02:23:14 +0000 (0:00:00.158) 0:01:05.576 ********* 2025-05-14 02:23:14.721442 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:14.722330 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:14.723483 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:14.726376 | orchestrator | 2025-05-14 02:23:14.726405 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-14 02:23:14.726418 | orchestrator | Wednesday 14 May 2025 02:23:14 +0000 (0:00:00.170) 0:01:05.746 ********* 2025-05-14 02:23:14.859383 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:14.860284 | orchestrator | 2025-05-14 02:23:14.860860 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-14 02:23:14.861818 | orchestrator | Wednesday 14 May 2025 02:23:14 +0000 (0:00:00.138) 0:01:05.884 ********* 2025-05-14 02:23:15.022897 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:15.025049 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:15.027369 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:15.027412 | orchestrator | 2025-05-14 02:23:15.027426 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-14 02:23:15.027695 | orchestrator | Wednesday 14 May 2025 02:23:15 +0000 (0:00:00.163) 0:01:06.048 ********* 2025-05-14 02:23:15.194096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:15.194345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:15.194839 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:15.195849 | orchestrator | 2025-05-14 02:23:15.196273 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-14 02:23:15.196913 | orchestrator | Wednesday 14 May 2025 02:23:15 +0000 (0:00:00.171) 0:01:06.219 ********* 2025-05-14 02:23:15.377240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:15.377591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:15.378792 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:15.379697 | orchestrator | 2025-05-14 02:23:15.379972 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-14 02:23:15.380853 | orchestrator | Wednesday 14 May 2025 02:23:15 +0000 (0:00:00.183) 0:01:06.403 ********* 2025-05-14 02:23:15.492432 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:15.493580 | orchestrator | 2025-05-14 02:23:15.494150 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-14 02:23:15.495534 | orchestrator | Wednesday 14 May 2025 02:23:15 +0000 (0:00:00.115) 0:01:06.518 ********* 2025-05-14 02:23:15.622554 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:15.623651 | orchestrator | 2025-05-14 02:23:15.625130 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-14 02:23:15.626236 | orchestrator | Wednesday 14 May 2025 02:23:15 +0000 (0:00:00.129) 0:01:06.647 ********* 2025-05-14 02:23:15.961323 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:15.961484 | orchestrator | 2025-05-14 02:23:15.963295 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-14 02:23:15.963812 | orchestrator | Wednesday 14 May 2025 02:23:15 +0000 (0:00:00.338) 0:01:06.986 ********* 2025-05-14 02:23:16.128242 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:23:16.128951 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-14 02:23:16.129576 | orchestrator | } 2025-05-14 02:23:16.130516 | orchestrator | 2025-05-14 02:23:16.131606 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-14 02:23:16.132371 | orchestrator | Wednesday 14 May 2025 02:23:16 +0000 (0:00:00.165) 0:01:07.152 ********* 2025-05-14 02:23:16.265599 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:23:16.266805 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-14 02:23:16.267116 | orchestrator | } 2025-05-14 02:23:16.268208 | orchestrator | 2025-05-14 02:23:16.269493 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-14 02:23:16.270098 | orchestrator | Wednesday 14 May 2025 02:23:16 +0000 (0:00:00.138) 0:01:07.290 ********* 2025-05-14 02:23:16.410683 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:23:16.412206 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-14 02:23:16.413114 | orchestrator | } 2025-05-14 02:23:16.414845 | orchestrator | 2025-05-14 02:23:16.415651 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-14 02:23:16.416618 | orchestrator | Wednesday 14 May 2025 02:23:16 +0000 (0:00:00.145) 0:01:07.435 ********* 2025-05-14 02:23:16.904202 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:16.904363 | orchestrator | 2025-05-14 02:23:16.905359 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-14 02:23:16.905640 | orchestrator | Wednesday 14 May 2025 02:23:16 +0000 (0:00:00.494) 0:01:07.929 ********* 2025-05-14 02:23:17.404948 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:17.405343 | orchestrator | 2025-05-14 02:23:17.405974 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-14 02:23:17.406508 | orchestrator | Wednesday 14 May 2025 02:23:17 +0000 (0:00:00.499) 0:01:08.428 ********* 2025-05-14 02:23:17.930758 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:17.930870 | orchestrator | 2025-05-14 02:23:17.930993 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-14 02:23:17.931556 | orchestrator | Wednesday 14 May 2025 02:23:17 +0000 (0:00:00.526) 0:01:08.955 ********* 2025-05-14 02:23:18.076463 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:18.077258 | orchestrator | 2025-05-14 02:23:18.080841 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-14 02:23:18.081244 | orchestrator | Wednesday 14 May 2025 02:23:18 +0000 (0:00:00.145) 0:01:09.101 ********* 2025-05-14 02:23:18.212179 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:18.213642 | orchestrator | 2025-05-14 02:23:18.215364 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-14 02:23:18.216106 | orchestrator | Wednesday 14 May 2025 02:23:18 +0000 (0:00:00.136) 0:01:09.238 ********* 2025-05-14 02:23:18.327592 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:18.328242 | orchestrator | 2025-05-14 02:23:18.330824 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-14 02:23:18.331802 | orchestrator | Wednesday 14 May 2025 02:23:18 +0000 (0:00:00.112) 0:01:09.350 ********* 2025-05-14 02:23:18.596811 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:23:18.596926 | orchestrator |  "vgs_report": { 2025-05-14 02:23:18.597407 | orchestrator |  "vg": [] 2025-05-14 02:23:18.597687 | orchestrator |  } 2025-05-14 02:23:18.598176 | orchestrator 
| } 2025-05-14 02:23:18.598395 | orchestrator | 2025-05-14 02:23:18.598674 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-14 02:23:18.599547 | orchestrator | Wednesday 14 May 2025 02:23:18 +0000 (0:00:00.268) 0:01:09.619 ********* 2025-05-14 02:23:18.722223 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:18.722394 | orchestrator | 2025-05-14 02:23:18.723023 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-14 02:23:18.723309 | orchestrator | Wednesday 14 May 2025 02:23:18 +0000 (0:00:00.128) 0:01:09.747 ********* 2025-05-14 02:23:18.853885 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:18.854142 | orchestrator | 2025-05-14 02:23:18.855345 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-14 02:23:18.856454 | orchestrator | Wednesday 14 May 2025 02:23:18 +0000 (0:00:00.131) 0:01:09.878 ********* 2025-05-14 02:23:18.991982 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:18.992159 | orchestrator | 2025-05-14 02:23:18.992369 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-14 02:23:18.994480 | orchestrator | Wednesday 14 May 2025 02:23:18 +0000 (0:00:00.138) 0:01:10.017 ********* 2025-05-14 02:23:19.116985 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:19.117146 | orchestrator | 2025-05-14 02:23:19.118565 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-14 02:23:19.118595 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.124) 0:01:10.141 ********* 2025-05-14 02:23:19.254632 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:19.254839 | orchestrator | 2025-05-14 02:23:19.254995 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-14 02:23:19.256324 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.138) 0:01:10.280 ********* 2025-05-14 02:23:19.382133 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:19.382900 | orchestrator | 2025-05-14 02:23:19.384113 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-14 02:23:19.384788 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.127) 0:01:10.407 ********* 2025-05-14 02:23:19.512853 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:19.514125 | orchestrator | 2025-05-14 02:23:19.514697 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-14 02:23:19.515815 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.131) 0:01:10.538 ********* 2025-05-14 02:23:19.651764 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:19.651808 | orchestrator | 2025-05-14 02:23:19.652892 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-14 02:23:19.653216 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.137) 0:01:10.676 ********* 2025-05-14 02:23:19.801306 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:19.801484 | orchestrator | 2025-05-14 02:23:19.801950 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-14 02:23:19.802254 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.148) 0:01:10.825 ********* 2025-05-14 02:23:19.920953 | orchestrator | 
skipping: [testbed-node-5] 2025-05-14 02:23:19.921036 | orchestrator | 2025-05-14 02:23:19.921280 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-14 02:23:19.921518 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.120) 0:01:10.945 ********* 2025-05-14 02:23:20.070947 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:20.071680 | orchestrator | 2025-05-14 02:23:20.072280 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-14 02:23:20.073004 | orchestrator | Wednesday 14 May 2025 02:23:20 +0000 (0:00:00.151) 0:01:11.096 ********* 2025-05-14 02:23:20.388092 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:20.388261 | orchestrator | 2025-05-14 02:23:20.389478 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-14 02:23:20.390192 | orchestrator | Wednesday 14 May 2025 02:23:20 +0000 (0:00:00.316) 0:01:11.413 ********* 2025-05-14 02:23:20.522220 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:20.522879 | orchestrator | 2025-05-14 02:23:20.522941 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-14 02:23:20.523262 | orchestrator | Wednesday 14 May 2025 02:23:20 +0000 (0:00:00.133) 0:01:11.547 ********* 2025-05-14 02:23:20.668242 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:20.668682 | orchestrator | 2025-05-14 02:23:20.670382 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-14 02:23:20.670411 | orchestrator | Wednesday 14 May 2025 02:23:20 +0000 (0:00:00.145) 0:01:11.693 ********* 2025-05-14 02:23:20.853023 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:20.853173 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:20.853475 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:20.854187 | orchestrator | 2025-05-14 02:23:20.854627 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-14 02:23:20.856805 | orchestrator | Wednesday 14 May 2025 02:23:20 +0000 (0:00:00.185) 0:01:11.879 ********* 2025-05-14 02:23:21.014008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:21.015381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:21.015433 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:21.016167 | orchestrator | 2025-05-14 02:23:21.016835 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-14 02:23:21.018191 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.159) 0:01:12.038 ********* 2025-05-14 02:23:21.164469 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:21.164814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:21.165159 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:21.165190 | orchestrator | 2025-05-14 02:23:21.165432 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-14 02:23:21.167185 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.150) 0:01:12.189 ********* 2025-05-14 02:23:21.310703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:21.310817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:21.310903 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:21.311437 | orchestrator | 2025-05-14 02:23:21.312052 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-14 02:23:21.312521 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.146) 0:01:12.336 ********* 2025-05-14 02:23:21.482230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:21.485231 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:21.485675 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:21.486190 | orchestrator | 2025-05-14 02:23:21.486972 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-14 02:23:21.487482 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.170) 0:01:12.506 ********* 2025-05-14 02:23:21.643654 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:21.645866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:21.645910 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:21.646284 | orchestrator | 2025-05-14 02:23:21.646925 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-14 02:23:21.647367 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.162) 0:01:12.668 ********* 2025-05-14 02:23:21.798846 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:21.799875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:21.799900 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:21.800397 | orchestrator | 2025-05-14 02:23:21.801051 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-14 02:23:21.801488 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.152) 0:01:12.821 ********* 2025-05-14 02:23:21.942449 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:21.942526 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:21.942987 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:21.943848 | orchestrator | 2025-05-14 02:23:21.944444 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-14 02:23:21.945139 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.144) 0:01:12.966 ********* 2025-05-14 02:23:22.575195 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:22.576222 | orchestrator | 2025-05-14 02:23:22.577175 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-14 02:23:22.578118 | orchestrator | Wednesday 14 May 2025 02:23:22 +0000 (0:00:00.633) 0:01:13.599 ********* 2025-05-14 02:23:23.084617 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:23.085071 | orchestrator | 2025-05-14 02:23:23.085286 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-14 02:23:23.085947 | orchestrator | Wednesday 14 May 2025 02:23:23 +0000 (0:00:00.510) 0:01:14.110 ********* 2025-05-14 02:23:23.257960 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:23.258171 | orchestrator | 2025-05-14 02:23:23.260903 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-14 02:23:23.260937 | orchestrator | Wednesday 14 May 2025 02:23:23 +0000 (0:00:00.170) 0:01:14.280 ********* 2025-05-14 02:23:23.449267 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'vg_name': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'}) 2025-05-14 02:23:23.449377 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'vg_name': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'}) 2025-05-14 02:23:23.449986 | orchestrator | 2025-05-14 02:23:23.450484 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-14 02:23:23.453188 | orchestrator | Wednesday 14 May 2025 02:23:23 +0000 (0:00:00.193) 0:01:14.473 ********* 2025-05-14 02:23:23.644762 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:23.645020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:23.646182 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:23.647538 | orchestrator | 2025-05-14 02:23:23.648387 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-14 02:23:23.650402 | orchestrator | Wednesday 14 May 2025 02:23:23 +0000 (0:00:00.196) 0:01:14.670 ********* 2025-05-14 02:23:23.856491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:23.856612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  
2025-05-14 02:23:23.857003 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:23.857381 | orchestrator | 2025-05-14 02:23:23.857998 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-14 02:23:23.858460 | orchestrator | Wednesday 14 May 2025 02:23:23 +0000 (0:00:00.210) 0:01:14.881 ********* 2025-05-14 02:23:24.055178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'})  2025-05-14 02:23:24.055691 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'})  2025-05-14 02:23:24.056804 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:24.058476 | orchestrator | 2025-05-14 02:23:24.058499 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-14 02:23:24.059586 | orchestrator | Wednesday 14 May 2025 02:23:24 +0000 (0:00:00.198) 0:01:15.079 ********* 2025-05-14 02:23:24.553099 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:23:24.553568 | orchestrator |  "lvm_report": { 2025-05-14 02:23:24.554825 | orchestrator |  "lv": [ 2025-05-14 02:23:24.556284 | orchestrator |  { 2025-05-14 02:23:24.558456 | orchestrator |  "lv_name": "osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca", 2025-05-14 02:23:24.559236 | orchestrator |  "vg_name": "ceph-03d77871-dede-5752-b4dd-afb6f86d8bca" 2025-05-14 02:23:24.560763 | orchestrator |  }, 2025-05-14 02:23:24.561428 | orchestrator |  { 2025-05-14 02:23:24.563197 | orchestrator |  "lv_name": "osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598", 2025-05-14 02:23:24.564072 | orchestrator |  "vg_name": "ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598" 2025-05-14 02:23:24.565136 | orchestrator |  } 2025-05-14 02:23:24.566227 | orchestrator |  ], 2025-05-14 02:23:24.567005 | orchestrator |  "pv": [ 2025-05-14 02:23:24.567782 | orchestrator |  { 2025-05-14 02:23:24.568184 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-14 02:23:24.569243 | orchestrator |  "vg_name": "ceph-03d77871-dede-5752-b4dd-afb6f86d8bca" 2025-05-14 02:23:24.569838 | orchestrator |  }, 2025-05-14 02:23:24.570558 | orchestrator |  { 2025-05-14 02:23:24.571200 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-14 02:23:24.572066 | orchestrator |  "vg_name": "ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598" 2025-05-14 02:23:24.572503 | orchestrator |  } 2025-05-14 02:23:24.572929 | orchestrator |  ] 2025-05-14 02:23:24.573607 | orchestrator |  } 2025-05-14 02:23:24.574307 | orchestrator | } 2025-05-14 02:23:24.575007 | orchestrator | 2025-05-14 02:23:24.577042 | orchestrator | 2025-05-14 02:23:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:23:24.577081 | orchestrator | 2025-05-14 02:23:24 | INFO  | Please wait and do not abort execution. 
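Editor's note on the play above: it effectively turns each entry in ceph_osd_devices into one LVM volume group (ceph-<osd_lvm_uuid>) on the raw device and one logical volume (osd-block-<osd_lvm_uuid>) inside it, then reads the result back with lvs/pvs in JSON format for the lvm_report printed above. The following is a minimal sketch only, for readers who want to reproduce that layout by hand: it is not the OSISM playbook that produced this log, the module choices (community.general.lvg / community.general.lvol) and the 100%VG sizing are assumptions, and only the host name, device names and osd_lvm_uuid values are taken from the output above.

# Minimal sketch, assuming the community.general collection is installed.
# Device names and UUIDs are copied from the log; everything else is illustrative.
- name: Recreate the Ceph block VG/LV layout seen on testbed-node-5
  hosts: testbed-node-5
  become: true
  vars:
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: 03d77871-dede-5752-b4dd-afb6f86d8bca
      sdc:
        osd_lvm_uuid: 0c7e27ae-f126-51b5-99e7-7e9908cad598
  tasks:
    - name: Create one block VG per OSD device (ceph-<uuid> on /dev/<device>)
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
        state: present
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create the osd-block-<uuid> LV in each VG (100%VG sizing is an assumption)
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%VG
        state: present
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: List Ceph LVs with their VGs, as in the 'Get list of Ceph LVs' task
      ansible.builtin.command: lvs -o lv_name,vg_name --reportformat json
      register: _lvs_cmd_output
      changed_when: false

    - name: Print the LV/VG pairs
      ansible.builtin.debug:
        msg: "{{ (_lvs_cmd_output.stdout | from_json).report }}"

Run against the node, the debug output should contain the same lv_name/vg_name pairs as the lvm_report block above; the DB and WAL branches of the play stay skipped here because no ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices are defined.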
2025-05-14 02:23:24.577168 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:23:24.578596 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-14 02:23:24.579054 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-14 02:23:24.580050 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-14 02:23:24.580602 | orchestrator | 2025-05-14 02:23:24.582300 | orchestrator | 2025-05-14 02:23:24.582900 | orchestrator | 2025-05-14 02:23:24.583594 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:23:24.584216 | orchestrator | Wednesday 14 May 2025 02:23:24 +0000 (0:00:00.496) 0:01:15.576 ********* 2025-05-14 02:23:24.584924 | orchestrator | =============================================================================== 2025-05-14 02:23:24.585398 | orchestrator | Create block VGs -------------------------------------------------------- 5.77s 2025-05-14 02:23:24.585871 | orchestrator | Create block LVs -------------------------------------------------------- 4.04s 2025-05-14 02:23:24.586257 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.15s 2025-05-14 02:23:24.586818 | orchestrator | Print LVM report data --------------------------------------------------- 2.12s 2025-05-14 02:23:24.586988 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.66s 2025-05-14 02:23:24.587404 | orchestrator | Add known links to the list of available block devices ------------------ 1.63s 2025-05-14 02:23:24.587970 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.59s 2025-05-14 02:23:24.588367 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2025-05-14 02:23:24.588872 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s 2025-05-14 02:23:24.589056 | orchestrator | Add known partitions to the list of available block devices ------------- 1.41s 2025-05-14 02:23:24.589576 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.11s 2025-05-14 02:23:24.589997 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2025-05-14 02:23:24.590412 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2025-05-14 02:23:24.590834 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.74s 2025-05-14 02:23:24.591128 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.72s 2025-05-14 02:23:24.591600 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-05-14 02:23:24.591985 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-05-14 02:23:24.592300 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-05-14 02:23:24.592658 | orchestrator | Fail if number of OSDs exceeds num_osds for a DB+WAL VG ----------------- 0.65s 2025-05-14 02:23:24.593283 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-05-14 02:23:26.647004 | orchestrator | 
2025-05-14 02:23:26 | INFO  | Task 7e30cdd5-e27e-4385-bc19-fb1d5480942b (facts) was prepared for execution. 2025-05-14 02:23:26.647086 | orchestrator | 2025-05-14 02:23:26 | INFO  | It takes a moment until task 7e30cdd5-e27e-4385-bc19-fb1d5480942b (facts) has been started and output is visible here. 2025-05-14 02:23:29.523461 | orchestrator | 2025-05-14 02:23:29.523676 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-14 02:23:29.525428 | orchestrator | 2025-05-14 02:23:29.528182 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 02:23:29.531052 | orchestrator | Wednesday 14 May 2025 02:23:29 +0000 (0:00:00.201) 0:00:00.201 ********* 2025-05-14 02:23:30.442883 | orchestrator | ok: [testbed-manager] 2025-05-14 02:23:30.442988 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:23:30.443002 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:23:30.443014 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:23:30.443025 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:30.443407 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:23:30.444257 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:30.445136 | orchestrator | 2025-05-14 02:23:30.445791 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 02:23:30.446303 | orchestrator | Wednesday 14 May 2025 02:23:30 +0000 (0:00:00.916) 0:00:01.118 ********* 2025-05-14 02:23:30.600242 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:23:30.675788 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:23:30.751221 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:23:30.835498 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:23:30.907171 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:31.677893 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:31.677997 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:31.678439 | orchestrator | 2025-05-14 02:23:31.682338 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:23:31.682360 | orchestrator | 2025-05-14 02:23:31.682366 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:23:31.682370 | orchestrator | Wednesday 14 May 2025 02:23:31 +0000 (0:00:01.239) 0:00:02.357 ********* 2025-05-14 02:23:36.319583 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:23:36.319952 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:23:36.323092 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:23:36.323130 | orchestrator | ok: [testbed-manager] 2025-05-14 02:23:36.323142 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:36.323154 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:23:36.323938 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:23:36.324434 | orchestrator | 2025-05-14 02:23:36.325154 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 02:23:36.325876 | orchestrator | 2025-05-14 02:23:36.326280 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 02:23:36.327087 | orchestrator | Wednesday 14 May 2025 02:23:36 +0000 (0:00:04.641) 0:00:06.999 ********* 2025-05-14 02:23:36.647605 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:23:36.733178 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:23:36.807839 | orchestrator | 
skipping: [testbed-node-1] 2025-05-14 02:23:36.891555 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:23:36.968617 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:37.008313 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:37.009681 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:23:37.010937 | orchestrator | 2025-05-14 02:23:37.012630 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:23:37.014396 | orchestrator | 2025-05-14 02:23:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:23:37.014419 | orchestrator | 2025-05-14 02:23:37 | INFO  | Please wait and do not abort execution. 2025-05-14 02:23:37.015988 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:23:37.016996 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:23:37.018439 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:23:37.019448 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:23:37.021053 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:23:37.021841 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:23:37.022670 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:23:37.023455 | orchestrator | 2025-05-14 02:23:37.024356 | orchestrator | Wednesday 14 May 2025 02:23:36 +0000 (0:00:00.688) 0:00:07.687 ********* 2025-05-14 02:23:37.024881 | orchestrator | =============================================================================== 2025-05-14 02:23:37.025532 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.64s 2025-05-14 02:23:37.026100 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-05-14 02:23:37.026431 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.92s 2025-05-14 02:23:37.027067 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.69s 2025-05-14 02:23:37.584133 | orchestrator | 2025-05-14 02:23:37.585581 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed May 14 02:23:37 UTC 2025 2025-05-14 02:23:37.585631 | orchestrator | 2025-05-14 02:23:38.899990 | orchestrator | 2025-05-14 02:23:38 | INFO  | Collection nutshell is prepared for execution 2025-05-14 02:23:38.900070 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [0] - dotfiles 2025-05-14 02:23:38.903542 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [0] - homer 2025-05-14 02:23:38.903568 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [0] - netdata 2025-05-14 02:23:38.903580 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [0] - openstackclient 2025-05-14 02:23:38.903591 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [0] - phpmyadmin 2025-05-14 02:23:38.903601 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [0] - common 2025-05-14 02:23:38.904859 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [1] -- loadbalancer 2025-05-14 02:23:38.904884 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [2] --- opensearch 2025-05-14 02:23:38.904895 | orchestrator | 
2025-05-14 02:23:38 | INFO  | A [2] --- mariadb-ng 2025-05-14 02:23:38.904906 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [3] ---- horizon 2025-05-14 02:23:38.904917 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [3] ---- keystone 2025-05-14 02:23:38.904927 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [4] ----- neutron 2025-05-14 02:23:38.904938 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [5] ------ wait-for-nova 2025-05-14 02:23:38.905106 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [5] ------ octavia 2025-05-14 02:23:38.905438 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [4] ----- barbican 2025-05-14 02:23:38.905459 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [4] ----- designate 2025-05-14 02:23:38.905471 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [4] ----- ironic 2025-05-14 02:23:38.905483 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [4] ----- placement 2025-05-14 02:23:38.905494 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [4] ----- magnum 2025-05-14 02:23:38.905747 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [1] -- openvswitch 2025-05-14 02:23:38.905768 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [2] --- ovn 2025-05-14 02:23:38.905857 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [1] -- memcached 2025-05-14 02:23:38.905872 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [1] -- redis 2025-05-14 02:23:38.906392 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [1] -- rabbitmq-ng 2025-05-14 02:23:38.906480 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [0] - kubernetes 2025-05-14 02:23:38.906496 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [1] -- kubeconfig 2025-05-14 02:23:38.906507 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [1] -- copy-kubeconfig 2025-05-14 02:23:38.906519 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [0] - ceph 2025-05-14 02:23:38.907354 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [1] -- ceph-pools 2025-05-14 02:23:38.907388 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [2] --- copy-ceph-keys 2025-05-14 02:23:38.907658 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [3] ---- cephclient 2025-05-14 02:23:38.907688 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-14 02:23:38.907789 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [4] ----- wait-for-keystone 2025-05-14 02:23:38.907803 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-14 02:23:38.907905 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [5] ------ glance 2025-05-14 02:23:38.907920 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [5] ------ cinder 2025-05-14 02:23:38.907932 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [5] ------ nova 2025-05-14 02:23:38.907943 | orchestrator | 2025-05-14 02:23:38 | INFO  | A [4] ----- prometheus 2025-05-14 02:23:38.907954 | orchestrator | 2025-05-14 02:23:38 | INFO  | D [5] ------ grafana 2025-05-14 02:23:39.026807 | orchestrator | 2025-05-14 02:23:39 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-14 02:23:39.026884 | orchestrator | 2025-05-14 02:23:39 | INFO  | Tasks are running in the background 2025-05-14 02:23:40.666562 | orchestrator | 2025-05-14 02:23:40 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-14 02:23:42.766971 | orchestrator | 2025-05-14 02:23:42 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:23:42.767322 | orchestrator | 2025-05-14 02:23:42 | INFO  | Task e8e37bd8-3924-42ae-8cd1-8ff9290133b6 is in 
state STARTED 2025-05-14 02:23:42.767893 | orchestrator | 2025-05-14 02:23:42 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:23:42.768508 | orchestrator | 2025-05-14 02:23:42 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:23:42.769135 | orchestrator | 2025-05-14 02:23:42 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:23:42.771446 | orchestrator | 2025-05-14 02:23:42 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:23:42.771469 | orchestrator | 2025-05-14 02:23:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:23:45.806550 | orchestrator | 2025-05-14 02:23:45 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:23:45.807666 | orchestrator | 2025-05-14 02:23:45 | INFO  | Task e8e37bd8-3924-42ae-8cd1-8ff9290133b6 is in state STARTED 2025-05-14 02:23:45.808050 | orchestrator | 2025-05-14 02:23:45 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:23:45.808503 | orchestrator | 2025-05-14 02:23:45 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:23:45.810300 | orchestrator | 2025-05-14 02:23:45 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:23:45.812742 | orchestrator | 2025-05-14 02:23:45 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:23:45.812806 | orchestrator | 2025-05-14 02:23:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:23:48.852598 | orchestrator | 2025-05-14 02:23:48 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:23:48.852660 | orchestrator | 2025-05-14 02:23:48 | INFO  | Task e8e37bd8-3924-42ae-8cd1-8ff9290133b6 is in state STARTED 2025-05-14 02:23:48.854307 | orchestrator | 2025-05-14 02:23:48 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:23:48.858228 | orchestrator | 2025-05-14 02:23:48 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:23:48.858298 | orchestrator | 2025-05-14 02:23:48 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:23:48.858312 | orchestrator | 2025-05-14 02:23:48 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:23:48.858324 | orchestrator | 2025-05-14 02:23:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:23:51.915609 | orchestrator | 2025-05-14 02:23:51 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:23:51.916777 | orchestrator | 2025-05-14 02:23:51 | INFO  | Task e8e37bd8-3924-42ae-8cd1-8ff9290133b6 is in state STARTED 2025-05-14 02:23:51.916811 | orchestrator | 2025-05-14 02:23:51 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:23:51.920003 | orchestrator | 2025-05-14 02:23:51 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:23:51.938289 | orchestrator | 2025-05-14 02:23:51 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:23:51.938365 | orchestrator | 2025-05-14 02:23:51 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:23:51.938379 | orchestrator | 2025-05-14 02:23:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:23:54.969073 | orchestrator | 2025-05-14 02:23:54 | INFO  | Task 
f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:23:54.969326 | orchestrator | 2025-05-14 02:23:54 | INFO  | Task e8e37bd8-3924-42ae-8cd1-8ff9290133b6 is in state STARTED 2025-05-14 02:23:54.969353 | orchestrator | 2025-05-14 02:23:54 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:23:54.972596 | orchestrator | 2025-05-14 02:23:54 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:23:54.973163 | orchestrator | 2025-05-14 02:23:54 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:23:54.973477 | orchestrator | 2025-05-14 02:23:54 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:23:54.973499 | orchestrator | 2025-05-14 02:23:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:23:58.031314 | orchestrator | 2025-05-14 02:23:58 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:23:58.033703 | orchestrator | 2025-05-14 02:23:58 | INFO  | Task e8e37bd8-3924-42ae-8cd1-8ff9290133b6 is in state STARTED 2025-05-14 02:23:58.037654 | orchestrator | 2025-05-14 02:23:58 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:23:58.040123 | orchestrator | 2025-05-14 02:23:58 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:23:58.050090 | orchestrator | 2025-05-14 02:23:58 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:23:58.053967 | orchestrator | 2025-05-14 02:23:58 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:23:58.054009 | orchestrator | 2025-05-14 02:23:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:01.112647 | orchestrator | 2025-05-14 02:24:01 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:01.114484 | orchestrator | 2025-05-14 02:24:01.114521 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-14 02:24:01.114534 | orchestrator | 2025-05-14 02:24:01.114547 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-14 02:24:01.114560 | orchestrator | Wednesday 14 May 2025 02:23:47 +0000 (0:00:00.401) 0:00:00.401 ********* 2025-05-14 02:24:01.114572 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:24:01.114585 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:01.114597 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:24:01.114608 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:24:01.114620 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:24:01.114631 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:24:01.114643 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:24:01.114654 | orchestrator | 2025-05-14 02:24:01.114666 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-05-14 02:24:01.114695 | orchestrator | Wednesday 14 May 2025 02:23:51 +0000 (0:00:03.706) 0:00:04.107 ********* 2025-05-14 02:24:01.114707 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-14 02:24:01.114751 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-14 02:24:01.114763 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-14 02:24:01.114774 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-14 02:24:01.114785 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-14 02:24:01.114796 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-14 02:24:01.114850 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-14 02:24:01.114862 | orchestrator | 2025-05-14 02:24:01.114873 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-05-14 02:24:01.114884 | orchestrator | Wednesday 14 May 2025 02:23:53 +0000 (0:00:01.854) 0:00:05.962 ********* 2025-05-14 02:24:01.114901 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:23:52.012136', 'end': '2025-05-14 02:23:52.020038', 'delta': '0:00:00.007902', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:24:01.114928 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:23:51.936436', 'end': '2025-05-14 02:23:51.949545', 'delta': '0:00:00.013109', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:24:01.114941 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:23:52.260895', 'end': '2025-05-14 02:23:52.269969', 'delta': '0:00:00.009074', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:24:01.114976 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:23:52.435645', 'end': '2025-05-14 02:23:52.443833', 'delta': '0:00:00.008188', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:24:01.114998 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:23:52.646030', 'end': '2025-05-14 02:23:52.651028', 'delta': '0:00:00.004998', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:24:01.115010 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:23:52.825925', 'end': '2025-05-14 02:23:52.832265', 'delta': '0:00:00.006340', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:24:01.115026 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:23:52.861380', 'end': '2025-05-14 02:23:52.869902', 'delta': '0:00:00.008522', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:24:01.115038 | orchestrator | 2025-05-14 02:24:01.115049 
| orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-05-14 02:24:01.115060 | orchestrator | Wednesday 14 May 2025 02:23:55 +0000 (0:00:02.173) 0:00:08.136 ********* 2025-05-14 02:24:01.115071 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-14 02:24:01.115083 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-14 02:24:01.115094 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-14 02:24:01.115108 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-14 02:24:01.115121 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-14 02:24:01.115133 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-14 02:24:01.115146 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-14 02:24:01.115158 | orchestrator | 2025-05-14 02:24:01.115170 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:24:01.115184 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:01.115206 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:01.115220 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:01.115240 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:01.115253 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:01.115266 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:01.115279 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:01.115292 | orchestrator | 2025-05-14 02:24:01.115305 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:02.913) 0:00:11.049 ********* 2025-05-14 02:24:01.115318 | orchestrator | =============================================================================== 2025-05-14 02:24:01.115330 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.71s 2025-05-14 02:24:01.115343 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.91s 2025-05-14 02:24:01.115356 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.17s 2025-05-14 02:24:01.115368 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 1.85s 2025-05-14 02:24:01.115405 | orchestrator | 2025-05-14 02:24:01 | INFO  | Task e8e37bd8-3924-42ae-8cd1-8ff9290133b6 is in state SUCCESS 2025-05-14 02:24:01.115481 | orchestrator | 2025-05-14 02:24:01 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:01.120196 | orchestrator | 2025-05-14 02:24:01 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:01.124192 | orchestrator | 2025-05-14 02:24:01 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:01.125716 | orchestrator | 2025-05-14 02:24:01 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:24:01.128126 | orchestrator | 2025-05-14 02:24:01 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:01.128157 | orchestrator | 2025-05-14 02:24:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:04.187904 | orchestrator | 2025-05-14 02:24:04 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:04.189097 | orchestrator | 2025-05-14 02:24:04 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:04.189677 | orchestrator | 2025-05-14 02:24:04 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:04.191337 | orchestrator | 2025-05-14 02:24:04 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:04.193068 | orchestrator | 2025-05-14 02:24:04 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:24:04.194333 | orchestrator | 2025-05-14 02:24:04 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:04.194370 | orchestrator | 2025-05-14 02:24:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:07.246711 | orchestrator | 2025-05-14 02:24:07 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:07.246835 | orchestrator | 2025-05-14 02:24:07 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:07.248556 | orchestrator | 2025-05-14 02:24:07 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:07.250345 | orchestrator | 2025-05-14 02:24:07 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:07.255453 | orchestrator | 2025-05-14 02:24:07 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:24:07.259686 | orchestrator | 2025-05-14 02:24:07 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:07.260038 | orchestrator | 2025-05-14 02:24:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:10.364711 | orchestrator | 2025-05-14 02:24:10 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:10.364852 | orchestrator | 2025-05-14 02:24:10 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:10.364865 | orchestrator | 2025-05-14 02:24:10 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:10.364874 | orchestrator | 2025-05-14 02:24:10 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:10.364883 | orchestrator | 2025-05-14 02:24:10 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:24:10.364893 | orchestrator | 2025-05-14 02:24:10 | INFO  | 
Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:10.364902 | orchestrator | 2025-05-14 02:24:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:13.430612 | orchestrator | 2025-05-14 02:24:13 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:13.432831 | orchestrator | 2025-05-14 02:24:13 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:13.434682 | orchestrator | 2025-05-14 02:24:13 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:13.435383 | orchestrator | 2025-05-14 02:24:13 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:13.436864 | orchestrator | 2025-05-14 02:24:13 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:24:13.437545 | orchestrator | 2025-05-14 02:24:13 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:13.437713 | orchestrator | 2025-05-14 02:24:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:16.507638 | orchestrator | 2025-05-14 02:24:16 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:16.509505 | orchestrator | 2025-05-14 02:24:16 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:16.510944 | orchestrator | 2025-05-14 02:24:16 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:16.511795 | orchestrator | 2025-05-14 02:24:16 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:16.515628 | orchestrator | 2025-05-14 02:24:16 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:24:16.515659 | orchestrator | 2025-05-14 02:24:16 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:16.515665 | orchestrator | 2025-05-14 02:24:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:19.564050 | orchestrator | 2025-05-14 02:24:19 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:19.567298 | orchestrator | 2025-05-14 02:24:19 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:19.567990 | orchestrator | 2025-05-14 02:24:19 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:19.571108 | orchestrator | 2025-05-14 02:24:19 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:19.574239 | orchestrator | 2025-05-14 02:24:19 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state STARTED 2025-05-14 02:24:19.574292 | orchestrator | 2025-05-14 02:24:19 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:19.574307 | orchestrator | 2025-05-14 02:24:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:22.659055 | orchestrator | 2025-05-14 02:24:22 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:22.660320 | orchestrator | 2025-05-14 02:24:22 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:22.664132 | orchestrator | 2025-05-14 02:24:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:22.667256 | orchestrator | 2025-05-14 02:24:22 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:22.670931 | orchestrator | 
2025-05-14 02:24:22 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:22.671315 | orchestrator | 2025-05-14 02:24:22 | INFO  | Task 4794976f-380c-44c5-bb73-63a286f11189 is in state SUCCESS 2025-05-14 02:24:22.673247 | orchestrator | 2025-05-14 02:24:22 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:22.674658 | orchestrator | 2025-05-14 02:24:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:25.726416 | orchestrator | 2025-05-14 02:24:25 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:25.727388 | orchestrator | 2025-05-14 02:24:25 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:25.729235 | orchestrator | 2025-05-14 02:24:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:25.731597 | orchestrator | 2025-05-14 02:24:25 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:25.733067 | orchestrator | 2025-05-14 02:24:25 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:25.733099 | orchestrator | 2025-05-14 02:24:25 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:25.733111 | orchestrator | 2025-05-14 02:24:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:28.797795 | orchestrator | 2025-05-14 02:24:28 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:28.799444 | orchestrator | 2025-05-14 02:24:28 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:28.803429 | orchestrator | 2025-05-14 02:24:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:28.805763 | orchestrator | 2025-05-14 02:24:28 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:28.807796 | orchestrator | 2025-05-14 02:24:28 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:28.810344 | orchestrator | 2025-05-14 02:24:28 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:28.810447 | orchestrator | 2025-05-14 02:24:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:31.861671 | orchestrator | 2025-05-14 02:24:31 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:31.863220 | orchestrator | 2025-05-14 02:24:31 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:31.865713 | orchestrator | 2025-05-14 02:24:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:31.865841 | orchestrator | 2025-05-14 02:24:31 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:31.867302 | orchestrator | 2025-05-14 02:24:31 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:31.868662 | orchestrator | 2025-05-14 02:24:31 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:31.868854 | orchestrator | 2025-05-14 02:24:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:34.946276 | orchestrator | 2025-05-14 02:24:34 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:34.947351 | orchestrator | 2025-05-14 02:24:34 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 
02:24:34.947812 | orchestrator | 2025-05-14 02:24:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:34.949452 | orchestrator | 2025-05-14 02:24:34 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:34.949914 | orchestrator | 2025-05-14 02:24:34 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:34.951127 | orchestrator | 2025-05-14 02:24:34 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:34.951207 | orchestrator | 2025-05-14 02:24:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:38.021283 | orchestrator | 2025-05-14 02:24:38 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:38.021616 | orchestrator | 2025-05-14 02:24:38 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:38.021649 | orchestrator | 2025-05-14 02:24:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:38.024194 | orchestrator | 2025-05-14 02:24:38 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:38.025327 | orchestrator | 2025-05-14 02:24:38 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:38.026490 | orchestrator | 2025-05-14 02:24:38 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:38.026543 | orchestrator | 2025-05-14 02:24:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:41.076408 | orchestrator | 2025-05-14 02:24:41 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state STARTED 2025-05-14 02:24:41.076505 | orchestrator | 2025-05-14 02:24:41 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:41.076519 | orchestrator | 2025-05-14 02:24:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:41.076531 | orchestrator | 2025-05-14 02:24:41 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:41.076562 | orchestrator | 2025-05-14 02:24:41 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:41.076573 | orchestrator | 2025-05-14 02:24:41 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:41.076585 | orchestrator | 2025-05-14 02:24:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:44.125230 | orchestrator | 2025-05-14 02:24:44 | INFO  | Task f7668c93-6856-40ca-b860-7f1ebae89df6 is in state SUCCESS 2025-05-14 02:24:44.125406 | orchestrator | 2025-05-14 02:24:44 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:44.125626 | orchestrator | 2025-05-14 02:24:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:44.125696 | orchestrator | 2025-05-14 02:24:44 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:44.129472 | orchestrator | 2025-05-14 02:24:44 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:44.129726 | orchestrator | 2025-05-14 02:24:44 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:44.129812 | orchestrator | 2025-05-14 02:24:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:47.180503 | orchestrator | 2025-05-14 02:24:47 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in 
state STARTED 2025-05-14 02:24:47.180602 | orchestrator | 2025-05-14 02:24:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:47.180948 | orchestrator | 2025-05-14 02:24:47 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:47.184041 | orchestrator | 2025-05-14 02:24:47 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:47.187046 | orchestrator | 2025-05-14 02:24:47 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:47.187120 | orchestrator | 2025-05-14 02:24:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:50.272151 | orchestrator | 2025-05-14 02:24:50 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:50.274548 | orchestrator | 2025-05-14 02:24:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:50.275829 | orchestrator | 2025-05-14 02:24:50 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:50.277519 | orchestrator | 2025-05-14 02:24:50 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:50.279278 | orchestrator | 2025-05-14 02:24:50 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:50.279390 | orchestrator | 2025-05-14 02:24:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:53.346285 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:53.348604 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:53.351647 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:53.354577 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:53.354776 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state STARTED 2025-05-14 02:24:53.354796 | orchestrator | 2025-05-14 02:24:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:56.404581 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:56.404849 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:56.405969 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:56.407465 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:56.409323 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task 1f97fecc-34f1-4def-a8ff-005b19091678 is in state SUCCESS 2025-05-14 02:24:56.409423 | orchestrator | 2025-05-14 02:24:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:56.410804 | orchestrator | 2025-05-14 02:24:56.410841 | orchestrator | 2025-05-14 02:24:56.410854 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-14 02:24:56.410866 | orchestrator | 2025-05-14 02:24:56.410877 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-14 02:24:56.410894 | orchestrator | Wednesday 14 May 2025 02:23:46 +0000 (0:00:00.516) 0:00:00.516 
********* 2025-05-14 02:24:56.410905 | orchestrator | ok: [testbed-manager] => { 2025-05-14 02:24:56.410917 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-05-14 02:24:56.410929 | orchestrator | } 2025-05-14 02:24:56.410938 | orchestrator | 2025-05-14 02:24:56.410948 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-14 02:24:56.410957 | orchestrator | Wednesday 14 May 2025 02:23:47 +0000 (0:00:00.376) 0:00:00.892 ********* 2025-05-14 02:24:56.410967 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.410977 | orchestrator | 2025-05-14 02:24:56.410987 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-14 02:24:56.410996 | orchestrator | Wednesday 14 May 2025 02:23:48 +0000 (0:00:01.256) 0:00:02.149 ********* 2025-05-14 02:24:56.411006 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-14 02:24:56.411015 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-14 02:24:56.411025 | orchestrator | 2025-05-14 02:24:56.411034 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-14 02:24:56.411044 | orchestrator | Wednesday 14 May 2025 02:23:49 +0000 (0:00:01.094) 0:00:03.243 ********* 2025-05-14 02:24:56.411053 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.411119 | orchestrator | 2025-05-14 02:24:56.411131 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-14 02:24:56.411140 | orchestrator | Wednesday 14 May 2025 02:23:51 +0000 (0:00:02.384) 0:00:05.628 ********* 2025-05-14 02:24:56.411150 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.411160 | orchestrator | 2025-05-14 02:24:56.411170 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-14 02:24:56.411179 | orchestrator | Wednesday 14 May 2025 02:23:53 +0000 (0:00:01.466) 0:00:07.094 ********* 2025-05-14 02:24:56.411189 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
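The "FAILED - RETRYING: ... (10 retries left)." line above is Ansible's standard until/retries output: the task is re-run until its success condition holds or the retry budget is exhausted, which is how the play waits for the freshly started container to become reachable. A minimal sketch of that pattern, assuming a plain `docker compose` call and an illustrative project directory rather than the actual osism.services.homer task:

# retry-sketch.yml -- illustrative until/retries pattern, not the real role task
- name: Manage a compose-based service (illustrative)
  hosts: testbed-manager
  tasks:
    - name: Bring the service up and retry until the command succeeds
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /opt/homer          # assumption: project directory, used here only as an example
      register: result
      retries: 10                  # matches the "(10 retries left)" budget printed in the log
      delay: 5                     # assumption: seconds to wait between attempts
      until: result.rc == 0
      changed_when: false

Each failed attempt prints exactly one "FAILED - RETRYING" line like the one above; only the final outcome of the task is counted in the play recap.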
2025-05-14 02:24:56.411199 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.411209 | orchestrator | 2025-05-14 02:24:56.411219 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-14 02:24:56.411228 | orchestrator | Wednesday 14 May 2025 02:24:18 +0000 (0:00:24.741) 0:00:31.836 ********* 2025-05-14 02:24:56.411238 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.411248 | orchestrator | 2025-05-14 02:24:56.411257 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:24:56.411267 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:56.411278 | orchestrator | 2025-05-14 02:24:56.411288 | orchestrator | Wednesday 14 May 2025 02:24:20 +0000 (0:00:01.939) 0:00:33.776 ********* 2025-05-14 02:24:56.411297 | orchestrator | =============================================================================== 2025-05-14 02:24:56.411306 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.74s 2025-05-14 02:24:56.411316 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.38s 2025-05-14 02:24:56.411325 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.94s 2025-05-14 02:24:56.411335 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.47s 2025-05-14 02:24:56.411344 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.26s 2025-05-14 02:24:56.411366 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.09s 2025-05-14 02:24:56.411376 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.38s 2025-05-14 02:24:56.411385 | orchestrator | 2025-05-14 02:24:56.411395 | orchestrator | 2025-05-14 02:24:56.411404 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-14 02:24:56.411414 | orchestrator | 2025-05-14 02:24:56.411423 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-14 02:24:56.411433 | orchestrator | Wednesday 14 May 2025 02:23:47 +0000 (0:00:00.397) 0:00:00.397 ********* 2025-05-14 02:24:56.411442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-14 02:24:56.411453 | orchestrator | 2025-05-14 02:24:56.411462 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-14 02:24:56.411472 | orchestrator | Wednesday 14 May 2025 02:23:47 +0000 (0:00:00.390) 0:00:00.788 ********* 2025-05-14 02:24:56.411481 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-14 02:24:56.411490 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-14 02:24:56.411500 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-14 02:24:56.411509 | orchestrator | 2025-05-14 02:24:56.411519 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-14 02:24:56.411529 | orchestrator | Wednesday 14 May 2025 02:23:49 +0000 (0:00:01.596) 0:00:02.385 ********* 2025-05-14 02:24:56.411539 | orchestrator | changed: [testbed-manager] 
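The openstackclient play above first creates /opt/openstackclient, /opt/openstackclient/data and /opt/configuration/environments/openstack, then copies a docker-compose.yml into place before starting the service. For orientation, a compose file of that general shape defines one long-running container with the configuration directory mounted in; the sketch below is an assumed illustration (image name, mount targets and command are placeholders, not the file the role actually deploys):

# /opt/openstackclient/docker-compose.yml -- illustrative sketch only
services:
  openstackclient:
    image: registry.example.org/openstackclient:latest   # assumption: placeholder image reference
    restart: unless-stopped
    volumes:
      - /opt/configuration/environments/openstack:/etc/openstack:ro   # assumption: clouds.yaml mount point
      - /opt/openstackclient/data:/data
    command: ["sleep", "infinity"]                        # assumption: keep the container idle for exec-style use

The "Copy openstack wrapper script" task that follows in the log presumably dispatches `openstack` invocations into this container; the exact mechanism is not visible in this output.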
2025-05-14 02:24:56.411549 | orchestrator | 2025-05-14 02:24:56.411558 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-14 02:24:56.411568 | orchestrator | Wednesday 14 May 2025 02:23:50 +0000 (0:00:01.398) 0:00:03.784 ********* 2025-05-14 02:24:56.411578 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-14 02:24:56.411588 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.411598 | orchestrator | 2025-05-14 02:24:56.411618 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-14 02:24:56.411629 | orchestrator | Wednesday 14 May 2025 02:24:31 +0000 (0:00:41.064) 0:00:44.848 ********* 2025-05-14 02:24:56.411638 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.411648 | orchestrator | 2025-05-14 02:24:56.411662 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-14 02:24:56.411672 | orchestrator | Wednesday 14 May 2025 02:24:34 +0000 (0:00:02.527) 0:00:47.376 ********* 2025-05-14 02:24:56.411682 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.411692 | orchestrator | 2025-05-14 02:24:56.411702 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-14 02:24:56.411711 | orchestrator | Wednesday 14 May 2025 02:24:35 +0000 (0:00:01.272) 0:00:48.649 ********* 2025-05-14 02:24:56.411721 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.411731 | orchestrator | 2025-05-14 02:24:56.411763 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-14 02:24:56.411774 | orchestrator | Wednesday 14 May 2025 02:24:38 +0000 (0:00:02.584) 0:00:51.233 ********* 2025-05-14 02:24:56.411783 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.411793 | orchestrator | 2025-05-14 02:24:56.411803 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-14 02:24:56.411813 | orchestrator | Wednesday 14 May 2025 02:24:39 +0000 (0:00:01.207) 0:00:52.441 ********* 2025-05-14 02:24:56.411822 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.411832 | orchestrator | 2025-05-14 02:24:56.411841 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-14 02:24:56.411851 | orchestrator | Wednesday 14 May 2025 02:24:40 +0000 (0:00:00.656) 0:00:53.097 ********* 2025-05-14 02:24:56.411861 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.411876 | orchestrator | 2025-05-14 02:24:56.411886 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:24:56.411896 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:56.411906 | orchestrator | 2025-05-14 02:24:56.411916 | orchestrator | Wednesday 14 May 2025 02:24:40 +0000 (0:00:00.518) 0:00:53.616 ********* 2025-05-14 02:24:56.411925 | orchestrator | =============================================================================== 2025-05-14 02:24:56.411935 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 41.06s 2025-05-14 02:24:56.411945 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.58s 2025-05-14 02:24:56.411954 | orchestrator | osism.services.openstackclient : Copy 
openstack wrapper script ---------- 2.53s 2025-05-14 02:24:56.411964 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.60s 2025-05-14 02:24:56.411974 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.41s 2025-05-14 02:24:56.412016 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.27s 2025-05-14 02:24:56.412026 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.21s 2025-05-14 02:24:56.412035 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.66s 2025-05-14 02:24:56.412045 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.52s 2025-05-14 02:24:56.412055 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.39s 2025-05-14 02:24:56.412064 | orchestrator | 2025-05-14 02:24:56.412074 | orchestrator | 2025-05-14 02:24:56.412084 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:24:56.412093 | orchestrator | 2025-05-14 02:24:56.412103 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:24:56.412113 | orchestrator | Wednesday 14 May 2025 02:23:46 +0000 (0:00:00.416) 0:00:00.416 ********* 2025-05-14 02:24:56.412123 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-14 02:24:56.412132 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-14 02:24:56.412142 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-14 02:24:56.412152 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-14 02:24:56.412161 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-14 02:24:56.412171 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-14 02:24:56.412180 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-14 02:24:56.412190 | orchestrator | 2025-05-14 02:24:56.412200 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-14 02:24:56.412209 | orchestrator | 2025-05-14 02:24:56.412219 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-14 02:24:56.412228 | orchestrator | Wednesday 14 May 2025 02:23:48 +0000 (0:00:01.566) 0:00:01.983 ********* 2025-05-14 02:24:56.412251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:24:56.412264 | orchestrator | 2025-05-14 02:24:56.412273 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-14 02:24:56.412283 | orchestrator | Wednesday 14 May 2025 02:23:50 +0000 (0:00:01.924) 0:00:03.908 ********* 2025-05-14 02:24:56.412293 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.412302 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:24:56.412312 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:24:56.412322 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:24:56.412331 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:24:56.412341 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:56.412351 | 
orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:56.412366 | orchestrator | 2025-05-14 02:24:56.412376 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-14 02:24:56.412392 | orchestrator | Wednesday 14 May 2025 02:23:52 +0000 (0:00:02.387) 0:00:06.295 ********* 2025-05-14 02:24:56.412402 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.412411 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:24:56.412421 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:24:56.412431 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:24:56.412440 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:24:56.412454 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:56.412464 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:56.412474 | orchestrator | 2025-05-14 02:24:56.412484 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-14 02:24:56.412493 | orchestrator | Wednesday 14 May 2025 02:23:55 +0000 (0:00:03.450) 0:00:09.745 ********* 2025-05-14 02:24:56.412503 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.412513 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:24:56.412523 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:24:56.412532 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:24:56.412542 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:24:56.412551 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:24:56.412561 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:24:56.412570 | orchestrator | 2025-05-14 02:24:56.412580 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-14 02:24:56.412590 | orchestrator | Wednesday 14 May 2025 02:23:57 +0000 (0:00:02.073) 0:00:11.819 ********* 2025-05-14 02:24:56.412599 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.412609 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:24:56.412618 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:24:56.412628 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:24:56.412638 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:24:56.412647 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:24:56.412656 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:24:56.412666 | orchestrator | 2025-05-14 02:24:56.412676 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-14 02:24:56.412685 | orchestrator | Wednesday 14 May 2025 02:24:07 +0000 (0:00:09.709) 0:00:21.528 ********* 2025-05-14 02:24:56.412695 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:24:56.412704 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:24:56.412714 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:24:56.412723 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:24:56.412733 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:24:56.412756 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:24:56.412765 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.412775 | orchestrator | 2025-05-14 02:24:56.412785 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-14 02:24:56.412794 | orchestrator | Wednesday 14 May 2025 02:24:27 +0000 (0:00:19.557) 0:00:41.085 ********* 2025-05-14 02:24:56.412805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:24:56.412817 | orchestrator | 2025-05-14 02:24:56.412826 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-14 02:24:56.412836 | orchestrator | Wednesday 14 May 2025 02:24:29 +0000 (0:00:02.485) 0:00:43.571 ********* 2025-05-14 02:24:56.412846 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-14 02:24:56.412856 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-14 02:24:56.412865 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-14 02:24:56.412875 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-14 02:24:56.412884 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-14 02:24:56.412894 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-14 02:24:56.412912 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-14 02:24:56.412922 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-14 02:24:56.412931 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-14 02:24:56.412941 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-14 02:24:56.412950 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-14 02:24:56.412960 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-14 02:24:56.412969 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-14 02:24:56.412979 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-14 02:24:56.412988 | orchestrator | 2025-05-14 02:24:56.412998 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-14 02:24:56.413008 | orchestrator | Wednesday 14 May 2025 02:24:37 +0000 (0:00:08.026) 0:00:51.598 ********* 2025-05-14 02:24:56.413017 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.413027 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:24:56.413036 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:24:56.413046 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:24:56.413055 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:24:56.413065 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:56.413074 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:56.413083 | orchestrator | 2025-05-14 02:24:56.413093 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-14 02:24:56.413103 | orchestrator | Wednesday 14 May 2025 02:24:39 +0000 (0:00:02.212) 0:00:53.811 ********* 2025-05-14 02:24:56.413112 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.413122 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:24:56.413131 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:24:56.413141 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:24:56.413150 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:24:56.413160 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:24:56.413169 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:24:56.413179 | orchestrator | 2025-05-14 02:24:56.413188 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-14 02:24:56.413198 | orchestrator | Wednesday 14 May 2025 02:24:42 +0000 (0:00:02.133) 0:00:55.945 ********* 2025-05-14 02:24:56.413207 | 
orchestrator | ok: [testbed-node-1] 2025-05-14 02:24:56.413217 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:24:56.413226 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:24:56.413236 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.413250 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:24:56.413260 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:56.413269 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:56.413279 | orchestrator | 2025-05-14 02:24:56.413289 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-14 02:24:56.413298 | orchestrator | Wednesday 14 May 2025 02:24:43 +0000 (0:00:01.811) 0:00:57.756 ********* 2025-05-14 02:24:56.413308 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:56.413318 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:24:56.413328 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:24:56.413337 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:24:56.413346 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:24:56.413356 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:56.413365 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:56.413375 | orchestrator | 2025-05-14 02:24:56.413385 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-14 02:24:56.413394 | orchestrator | Wednesday 14 May 2025 02:24:45 +0000 (0:00:01.669) 0:00:59.426 ********* 2025-05-14 02:24:56.413404 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-14 02:24:56.413415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:24:56.413431 | orchestrator | 2025-05-14 02:24:56.413441 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-14 02:24:56.413451 | orchestrator | Wednesday 14 May 2025 02:24:46 +0000 (0:00:01.179) 0:01:00.605 ********* 2025-05-14 02:24:56.413460 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.413470 | orchestrator | 2025-05-14 02:24:56.413479 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-14 02:24:56.413489 | orchestrator | Wednesday 14 May 2025 02:24:50 +0000 (0:00:03.869) 0:01:04.474 ********* 2025-05-14 02:24:56.413498 | orchestrator | changed: [testbed-manager] 2025-05-14 02:24:56.413508 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:24:56.413518 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:24:56.413527 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:24:56.413537 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:24:56.413546 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:24:56.413556 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:24:56.413565 | orchestrator | 2025-05-14 02:24:56.413575 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:24:56.413584 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:56.413594 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:56.413604 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-05-14 02:24:56.413613 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:56.413623 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:56.413633 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:56.413642 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:24:56.413652 | orchestrator | 2025-05-14 02:24:56.413661 | orchestrator | Wednesday 14 May 2025 02:24:54 +0000 (0:00:03.802) 0:01:08.277 ********* 2025-05-14 02:24:56.413671 | orchestrator | =============================================================================== 2025-05-14 02:24:56.413681 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.56s 2025-05-14 02:24:56.413690 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.71s 2025-05-14 02:24:56.413700 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.03s 2025-05-14 02:24:56.413709 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.87s 2025-05-14 02:24:56.413719 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.80s 2025-05-14 02:24:56.413728 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.45s 2025-05-14 02:24:56.413761 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.49s 2025-05-14 02:24:56.413772 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.39s 2025-05-14 02:24:56.413781 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.21s 2025-05-14 02:24:56.413791 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.13s 2025-05-14 02:24:56.413800 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.07s 2025-05-14 02:24:56.413810 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.92s 2025-05-14 02:24:56.413824 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.81s 2025-05-14 02:24:56.413834 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.67s 2025-05-14 02:24:56.413850 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.57s 2025-05-14 02:24:56.413860 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.18s 2025-05-14 02:24:59.442982 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:24:59.443832 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:24:59.445121 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:24:59.446509 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:24:59.446547 | orchestrator | 2025-05-14 02:24:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:02.486308 | orchestrator | 
2025-05-14 02:25:02 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:25:02.489718 | orchestrator | 2025-05-14 02:25:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:02.490039 | orchestrator | 2025-05-14 02:25:02 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:02.490839 | orchestrator | 2025-05-14 02:25:02 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:02.490862 | orchestrator | 2025-05-14 02:25:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:05.526984 | orchestrator | 2025-05-14 02:25:05 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state STARTED 2025-05-14 02:25:05.534440 | orchestrator | 2025-05-14 02:25:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:05.534928 | orchestrator | 2025-05-14 02:25:05 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:05.537438 | orchestrator | 2025-05-14 02:25:05 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:05.537517 | orchestrator | 2025-05-14 02:25:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:08.582911 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task e4048610-865f-4f1a-8a07-aa50eebabcfd is in state SUCCESS 2025-05-14 02:25:08.585180 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:08.585628 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:08.588790 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:08.589928 | orchestrator | 2025-05-14 02:25:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:11.623549 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:11.623694 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:11.624201 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:11.624263 | orchestrator | 2025-05-14 02:25:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:14.676952 | orchestrator | 2025-05-14 02:25:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:14.677241 | orchestrator | 2025-05-14 02:25:14 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:14.677937 | orchestrator | 2025-05-14 02:25:14 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:14.677966 | orchestrator | 2025-05-14 02:25:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:17.725533 | orchestrator | 2025-05-14 02:25:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:17.728363 | orchestrator | 2025-05-14 02:25:17 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:17.730326 | orchestrator | 2025-05-14 02:25:17 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:17.730677 | orchestrator | 2025-05-14 02:25:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:20.785417 | orchestrator | 2025-05-14 02:25:20 | INFO 
 | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:20.785585 | orchestrator | 2025-05-14 02:25:20 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:20.786486 | orchestrator | 2025-05-14 02:25:20 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:20.786513 | orchestrator | 2025-05-14 02:25:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:23.839624 | orchestrator | 2025-05-14 02:25:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:23.842564 | orchestrator | 2025-05-14 02:25:23 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:23.844351 | orchestrator | 2025-05-14 02:25:23 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:23.844413 | orchestrator | 2025-05-14 02:25:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:26.891924 | orchestrator | 2025-05-14 02:25:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:26.893328 | orchestrator | 2025-05-14 02:25:26 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:26.895566 | orchestrator | 2025-05-14 02:25:26 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:26.896050 | orchestrator | 2025-05-14 02:25:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:29.942557 | orchestrator | 2025-05-14 02:25:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:29.944360 | orchestrator | 2025-05-14 02:25:29 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:29.946818 | orchestrator | 2025-05-14 02:25:29 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:29.946976 | orchestrator | 2025-05-14 02:25:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:32.999983 | orchestrator | 2025-05-14 02:25:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:33.000985 | orchestrator | 2025-05-14 02:25:32 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:33.003284 | orchestrator | 2025-05-14 02:25:33 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:33.003326 | orchestrator | 2025-05-14 02:25:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:36.057363 | orchestrator | 2025-05-14 02:25:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:36.057429 | orchestrator | 2025-05-14 02:25:36 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:36.057838 | orchestrator | 2025-05-14 02:25:36 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:36.058194 | orchestrator | 2025-05-14 02:25:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:39.100142 | orchestrator | 2025-05-14 02:25:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:39.100270 | orchestrator | 2025-05-14 02:25:39 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:39.100395 | orchestrator | 2025-05-14 02:25:39 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:39.100411 | orchestrator | 2025-05-14 02:25:39 | INFO  | Wait 1 second(s) until 
the next check 2025-05-14 02:25:42.145384 | orchestrator | 2025-05-14 02:25:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:42.146466 | orchestrator | 2025-05-14 02:25:42 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:42.153452 | orchestrator | 2025-05-14 02:25:42 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:42.153500 | orchestrator | 2025-05-14 02:25:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:45.194652 | orchestrator | 2025-05-14 02:25:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:45.194738 | orchestrator | 2025-05-14 02:25:45 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:45.194837 | orchestrator | 2025-05-14 02:25:45 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:45.195359 | orchestrator | 2025-05-14 02:25:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:48.233355 | orchestrator | 2025-05-14 02:25:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:48.233450 | orchestrator | 2025-05-14 02:25:48 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:48.234139 | orchestrator | 2025-05-14 02:25:48 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:48.234227 | orchestrator | 2025-05-14 02:25:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:51.277095 | orchestrator | 2025-05-14 02:25:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:51.278355 | orchestrator | 2025-05-14 02:25:51 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:51.278658 | orchestrator | 2025-05-14 02:25:51 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:51.278883 | orchestrator | 2025-05-14 02:25:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:54.325655 | orchestrator | 2025-05-14 02:25:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:54.326900 | orchestrator | 2025-05-14 02:25:54 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:54.328330 | orchestrator | 2025-05-14 02:25:54 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:54.328357 | orchestrator | 2025-05-14 02:25:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:57.379014 | orchestrator | 2025-05-14 02:25:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:25:57.379246 | orchestrator | 2025-05-14 02:25:57 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:25:57.380257 | orchestrator | 2025-05-14 02:25:57 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:25:57.380402 | orchestrator | 2025-05-14 02:25:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:00.422759 | orchestrator | 2025-05-14 02:26:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:00.425217 | orchestrator | 2025-05-14 02:26:00 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:00.427730 | orchestrator | 2025-05-14 02:26:00 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 
02:26:00.427854 | orchestrator | 2025-05-14 02:26:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:03.467960 | orchestrator | 2025-05-14 02:26:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:03.468707 | orchestrator | 2025-05-14 02:26:03 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:03.469675 | orchestrator | 2025-05-14 02:26:03 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state STARTED 2025-05-14 02:26:03.469922 | orchestrator | 2025-05-14 02:26:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:06.519681 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:06.522353 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:06.524968 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state STARTED 2025-05-14 02:26:06.529377 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:06.532226 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task 987ce73f-738c-4c39-b289-b61b3ebf166b is in state SUCCESS 2025-05-14 02:26:06.534211 | orchestrator | 2025-05-14 02:26:06.534378 | orchestrator | 2025-05-14 02:26:06.534395 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-14 02:26:06.534408 | orchestrator | 2025-05-14 02:26:06.534419 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-14 02:26:06.534431 | orchestrator | Wednesday 14 May 2025 02:24:03 +0000 (0:00:00.225) 0:00:00.225 ********* 2025-05-14 02:26:06.534442 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:06.534454 | orchestrator | 2025-05-14 02:26:06.534464 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-14 02:26:06.534475 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.843) 0:00:01.069 ********* 2025-05-14 02:26:06.534486 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-05-14 02:26:06.534497 | orchestrator | 2025-05-14 02:26:06.534508 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-14 02:26:06.534519 | orchestrator | Wednesday 14 May 2025 02:24:05 +0000 (0:00:00.550) 0:00:01.619 ********* 2025-05-14 02:26:06.534530 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.534542 | orchestrator | 2025-05-14 02:26:06.534553 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-14 02:26:06.534564 | orchestrator | Wednesday 14 May 2025 02:24:06 +0000 (0:00:01.384) 0:00:03.004 ********* 2025-05-14 02:26:06.534575 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
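
The block of "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above is the OSISM client polling the state of asynchronously dispatched deployment tasks and printing each task's captured Ansible output once it reaches SUCCESS (as happens for task 987ce73f... just before the phpmyadmin play output appears). The following Python sketch only illustrates that kind of poll loop; wait_for_tasks, fake_state, the fixed delay and the shortened task IDs are assumptions made for illustration and are not the actual osism implementation.

    import time

    def wait_for_tasks(task_ids, get_state, delay_seconds=1):
        """Poll task IDs via get_state(task_id) until each reports SUCCESS or FAILURE."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"INFO  | Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"INFO  | Wait {delay_seconds} second(s) until the next check")
                time.sleep(delay_seconds)

    # Toy stand-in for a real task-state lookup: each task reports STARTED twice,
    # then SUCCESS. In OSISM the state would come from the task queue backend.
    _counters = {}
    def fake_state(task_id):
        n = _counters[task_id] = _counters.get(task_id, 0) + 1
        return "STARTED" if n <= 2 else "SUCCESS"

    wait_for_tasks(["d96aeed1", "9a9341a3", "987ce73f"], fake_state, delay_seconds=1)

The FAILED - RETRYING message that follows is normal for this step: the "Manage phpmyadmin service" task is retried until the container comes up, which is why it accounts for most of the play's runtime in the recap below.
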
2025-05-14 02:26:06.534586 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:06.534596 | orchestrator | 2025-05-14 02:26:06.534607 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-14 02:26:06.534618 | orchestrator | Wednesday 14 May 2025 02:25:01 +0000 (0:00:55.502) 0:00:58.506 ********* 2025-05-14 02:26:06.534629 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.534640 | orchestrator | 2025-05-14 02:26:06.534650 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:26:06.534689 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:06.534703 | orchestrator | 2025-05-14 02:26:06.534714 | orchestrator | Wednesday 14 May 2025 02:25:05 +0000 (0:00:03.363) 0:01:01.870 ********* 2025-05-14 02:26:06.534725 | orchestrator | =============================================================================== 2025-05-14 02:26:06.534736 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 55.50s 2025-05-14 02:26:06.534747 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.36s 2025-05-14 02:26:06.534758 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.38s 2025-05-14 02:26:06.534793 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.84s 2025-05-14 02:26:06.534806 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.55s 2025-05-14 02:26:06.534819 | orchestrator | 2025-05-14 02:26:06.534832 | orchestrator | 2025-05-14 02:26:06.534845 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-05-14 02:26:06.534858 | orchestrator | 2025-05-14 02:26:06.534871 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-14 02:26:06.534884 | orchestrator | Wednesday 14 May 2025 02:23:42 +0000 (0:00:00.387) 0:00:00.387 ********* 2025-05-14 02:26:06.534897 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:26:06.534911 | orchestrator | 2025-05-14 02:26:06.534924 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-14 02:26:06.534937 | orchestrator | Wednesday 14 May 2025 02:23:44 +0000 (0:00:01.925) 0:00:02.312 ********* 2025-05-14 02:26:06.534949 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:26:06.534962 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:26:06.534975 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:26:06.534988 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:26:06.535001 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:26:06.535014 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:26:06.535028 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:26:06.535041 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 
'fluentd'}, 'fluentd']) 2025-05-14 02:26:06.535054 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:26:06.535067 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:26:06.535080 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:26:06.535092 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:26:06.535106 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:26:06.535119 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:26:06.535132 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:26:06.535146 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:26:06.535158 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:26:06.535184 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:26:06.535196 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:26:06.535216 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:26:06.535227 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:26:06.535238 | orchestrator | 2025-05-14 02:26:06.535250 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-14 02:26:06.535262 | orchestrator | Wednesday 14 May 2025 02:23:47 +0000 (0:00:03.543) 0:00:05.856 ********* 2025-05-14 02:26:06.535273 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:26:06.535285 | orchestrator | 2025-05-14 02:26:06.535296 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-14 02:26:06.535307 | orchestrator | Wednesday 14 May 2025 02:23:49 +0000 (0:00:01.823) 0:00:07.680 ********* 2025-05-14 02:26:06.535328 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.535343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.535356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.535367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.535379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.535390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.535417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535445 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.535510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535554 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.535652 | orchestrator | 2025-05-14 02:26:06.535664 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-14 02:26:06.535675 | orchestrator | Wednesday 14 May 2025 02:23:54 +0000 (0:00:05.275) 0:00:12.956 ********* 2025-05-14 02:26:06.535694 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.535707 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535723 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535735 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:26:06.535746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.535758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535801 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535821 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:26:06.535833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.535859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535882 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:26:06.535899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.535911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535923 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535934 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:26:06.535945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.535963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.535986 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:26:06.536004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.536016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.536032 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.536043 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:26:06.536055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.536066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.536086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.536097 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:26:06.536108 | orchestrator | 2025-05-14 02:26:06.536119 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-14 02:26:06.536130 | orchestrator | Wednesday 14 May 2025 02:23:56 +0000 (0:00:01.591) 0:00:14.548 ********* 2025-05-14 02:26:06.536142 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.536160 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.536172 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.536183 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:26:06.536200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.536212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.536223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.536241 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:26:06.536252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.536264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537351 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:26:06.537362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.537380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537401 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:26:06.537411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.537431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537452 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:26:06.537462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.537482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537503 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:26:06.537516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:26:06.537527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.537554 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:26:06.537564 | orchestrator | 2025-05-14 02:26:06.537574 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-14 02:26:06.537583 | orchestrator | Wednesday 14 May 2025 02:23:59 +0000 (0:00:03.295) 0:00:17.843 ********* 2025-05-14 02:26:06.537593 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:26:06.537603 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:26:06.537612 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:26:06.537622 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:26:06.537631 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:26:06.537641 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:26:06.537650 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:26:06.537660 | orchestrator | 2025-05-14 02:26:06.537670 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-14 02:26:06.537679 | orchestrator | Wednesday 14 May 2025 02:24:01 +0000 (0:00:01.267) 0:00:19.111 ********* 2025-05-14 02:26:06.537689 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:26:06.537699 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:26:06.537708 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:26:06.537718 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:26:06.537727 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:26:06.537736 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:26:06.537746 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:26:06.537756 | orchestrator | 2025-05-14 02:26:06.537786 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-14 02:26:06.537797 | orchestrator | Wednesday 14 May 2025 02:24:01 +0000 (0:00:00.871) 0:00:19.982 ********* 2025-05-14 02:26:06.537807 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:06.537817 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:06.537826 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:06.537836 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:06.537846 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:06.537855 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:06.537865 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.537875 | orchestrator | 2025-05-14 02:26:06.537885 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-05-14 02:26:06.537894 | orchestrator | Wednesday 14 May 2025 02:24:41 +0000 (0:00:39.929) 0:00:59.912 ********* 2025-05-14 02:26:06.537904 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:06.537919 | orchestrator | ok: [testbed-node-1] 
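
The "Ensure fluentd image is present for label check" task above (the ~40 s step) makes sure the fluentd image exists locally, and "Fetch fluentd Docker image labels" then reads its labels so that the following "Set fluentd facts" tasks can branch on them. Purely as an illustration, and not the mechanism kolla-ansible actually uses on the hosts, the labels of the image named in this log could be inspected with the Docker SDK for Python:

    import docker
    from docker.errors import ImageNotFound

    # Illustrative sketch: read the labels of the fluentd image referenced in the log above.
    client = docker.from_env()
    repo, tag = "registry.osism.tech/kolla/release/fluentd", "5.0.5.20241206"

    try:
        image = client.images.get(f"{repo}:{tag}")   # already pulled by the "ensure present" step
    except ImageNotFound:
        image = client.images.pull(repo, tag=tag)    # pull it if it is missing locally

    # image.labels is a dict of the labels baked into the image at build time.
    print(image.labels)

The pattern is simply "pull if needed, then inspect labels"; the pull is what the long-running "Ensure fluentd image is present" step pays for on each node.
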
2025-05-14 02:26:06.537929 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:06.537939 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:06.537948 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:26:06.537958 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:26:06.537968 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:26:06.537977 | orchestrator | 2025-05-14 02:26:06.538000 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-14 02:26:06.538010 | orchestrator | Wednesday 14 May 2025 02:24:44 +0000 (0:00:02.525) 0:01:02.437 ********* 2025-05-14 02:26:06.538078 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:06.538088 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:06.538098 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:26:06.538108 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:06.538117 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:26:06.538127 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:26:06.538136 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:26:06.538146 | orchestrator | 2025-05-14 02:26:06.538156 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-05-14 02:26:06.538166 | orchestrator | Wednesday 14 May 2025 02:24:45 +0000 (0:00:01.074) 0:01:03.511 ********* 2025-05-14 02:26:06.538175 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:26:06.538185 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:26:06.538195 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:26:06.538204 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:26:06.538214 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:26:06.538223 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:26:06.538233 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:26:06.538243 | orchestrator | 2025-05-14 02:26:06.538253 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-14 02:26:06.538262 | orchestrator | Wednesday 14 May 2025 02:24:46 +0000 (0:00:00.828) 0:01:04.340 ********* 2025-05-14 02:26:06.538272 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:26:06.538282 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:26:06.538291 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:26:06.538301 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:26:06.538311 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:26:06.538320 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:26:06.538330 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:26:06.538339 | orchestrator | 2025-05-14 02:26:06.538349 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-14 02:26:06.538359 | orchestrator | Wednesday 14 May 2025 02:24:47 +0000 (0:00:00.693) 0:01:05.034 ********* 2025-05-14 02:26:06.538369 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.538380 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.538390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.538401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.538441 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.538472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 
02:26:06.538483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.538493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.538540 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-14 02:26:06.538562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.538664 | orchestrator | 2025-05-14 02:26:06.538674 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-14 02:26:06.538684 | orchestrator | Wednesday 14 May 2025 02:24:53 +0000 (0:00:06.027) 0:01:11.061 ********* 2025-05-14 02:26:06.538694 | orchestrator | [WARNING]: Skipped 2025-05-14 02:26:06.538704 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-14 02:26:06.538713 | orchestrator | to this access issue: 2025-05-14 02:26:06.538723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-14 02:26:06.538733 | orchestrator | directory 2025-05-14 02:26:06.538742 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:26:06.538752 | orchestrator | 2025-05-14 02:26:06.538761 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-14 02:26:06.538791 | orchestrator | Wednesday 14 May 2025 02:24:54 +0000 (0:00:01.153) 0:01:12.215 ********* 2025-05-14 02:26:06.538801 | orchestrator | [WARNING]: Skipped 2025-05-14 02:26:06.538810 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-14 02:26:06.538820 | orchestrator | to this access issue: 2025-05-14 02:26:06.538833 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-14 02:26:06.538843 | orchestrator | directory 2025-05-14 02:26:06.538853 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:26:06.538862 | orchestrator | 2025-05-14 02:26:06.538872 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-14 02:26:06.538882 | orchestrator | Wednesday 14 May 2025 02:24:54 +0000 (0:00:00.715) 0:01:12.931 ********* 2025-05-14 02:26:06.538892 | orchestrator | [WARNING]: Skipped 2025-05-14 02:26:06.538927 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-14 02:26:06.538937 | orchestrator | to this access issue: 2025-05-14 02:26:06.538947 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-14 02:26:06.538956 | orchestrator | directory 2025-05-14 02:26:06.538966 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:26:06.538976 | orchestrator | 2025-05-14 02:26:06.538986 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-14 02:26:06.538995 | orchestrator | Wednesday 14 May 2025 02:24:55 +0000 (0:00:00.690) 0:01:13.621 ********* 2025-05-14 02:26:06.539005 | orchestrator | [WARNING]: Skipped 2025-05-14 02:26:06.539021 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-14 02:26:06.539031 | orchestrator | to this access 
issue: 2025-05-14 02:26:06.539040 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-14 02:26:06.539050 | orchestrator | directory 2025-05-14 02:26:06.539060 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:26:06.539069 | orchestrator | 2025-05-14 02:26:06.539079 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-14 02:26:06.539089 | orchestrator | Wednesday 14 May 2025 02:24:56 +0000 (0:00:00.782) 0:01:14.403 ********* 2025-05-14 02:26:06.539098 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.539108 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:06.539118 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:06.539127 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:06.539137 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:06.539146 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:06.539156 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:06.539166 | orchestrator | 2025-05-14 02:26:06.539175 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-14 02:26:06.539185 | orchestrator | Wednesday 14 May 2025 02:25:00 +0000 (0:00:04.149) 0:01:18.553 ********* 2025-05-14 02:26:06.539195 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:26:06.539205 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:26:06.539214 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:26:06.539224 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:26:06.539233 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:26:06.539243 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:26:06.539253 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:26:06.539262 | orchestrator | 2025-05-14 02:26:06.539272 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-14 02:26:06.539282 | orchestrator | Wednesday 14 May 2025 02:25:03 +0000 (0:00:02.971) 0:01:21.524 ********* 2025-05-14 02:26:06.539291 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.539301 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:06.539311 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:06.539321 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:06.539330 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:06.539345 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:06.539355 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:06.539365 | orchestrator | 2025-05-14 02:26:06.539374 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-14 02:26:06.539384 | orchestrator | Wednesday 14 May 2025 02:25:05 +0000 (0:00:02.478) 0:01:24.002 ********* 2025-05-14 02:26:06.539394 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.539409 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.539425 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.539436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.539447 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.539469 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.539484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.539495 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.539506 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.539529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.539540 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.539550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.539560 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.539570 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.539591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.539602 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.539618 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.539632 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.539642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:26:06.539653 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.539663 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.539673 | orchestrator | 2025-05-14 02:26:06.539683 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-14 02:26:06.539692 | orchestrator | Wednesday 14 May 2025 02:25:08 +0000 (0:00:02.025) 0:01:26.027 ********* 2025-05-14 02:26:06.539702 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:26:06.539712 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:26:06.539722 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:26:06.539731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:26:06.539741 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:26:06.539750 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:26:06.539760 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:26:06.539822 | orchestrator | 2025-05-14 02:26:06.539833 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-14 02:26:06.539862 | orchestrator | Wednesday 14 May 2025 02:25:10 +0000 (0:00:02.014) 0:01:28.041 ********* 2025-05-14 02:26:06.539872 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:26:06.539882 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:26:06.539892 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:26:06.539902 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:26:06.539911 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:26:06.539921 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:26:06.539930 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:26:06.539940 | orchestrator | 2025-05-14 02:26:06.539949 | orchestrator | TASK [common : Check common 
containers] **************************************** 2025-05-14 02:26:06.539959 | orchestrator | Wednesday 14 May 2025 02:25:12 +0000 (0:00:02.071) 0:01:30.113 ********* 2025-05-14 02:26:06.539973 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.539983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.539993 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.540014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.540034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.540045 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540069 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.540080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:26:06.540101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540121 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540188 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:26:06.540224 | orchestrator | 2025-05-14 02:26:06.540234 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-14 02:26:06.540244 | orchestrator | Wednesday 14 May 2025 02:25:15 +0000 (0:00:03.508) 0:01:33.622 ********* 2025-05-14 02:26:06.540253 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.540267 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:06.540278 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:06.540287 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:06.540297 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:06.540306 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:06.540316 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:06.540325 | orchestrator | 2025-05-14 02:26:06.540335 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-14 02:26:06.540344 | orchestrator | Wednesday 14 May 2025 02:25:17 +0000 (0:00:01.569) 0:01:35.191 ********* 2025-05-14 02:26:06.540354 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.540363 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:06.540371 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:06.540378 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:06.540386 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:06.540394 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:06.540402 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:06.540409 | orchestrator | 2025-05-14 02:26:06.540417 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:26:06.540425 | orchestrator | Wednesday 14 May 2025 02:25:18 +0000 (0:00:01.487) 0:01:36.679 ********* 2025-05-14 02:26:06.540433 | orchestrator | 2025-05-14 02:26:06.540441 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:26:06.540449 | orchestrator | Wednesday 14 
May 2025 02:25:18 +0000 (0:00:00.065) 0:01:36.744 ********* 2025-05-14 02:26:06.540456 | orchestrator | 2025-05-14 02:26:06.540464 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:26:06.540472 | orchestrator | Wednesday 14 May 2025 02:25:18 +0000 (0:00:00.070) 0:01:36.815 ********* 2025-05-14 02:26:06.540480 | orchestrator | 2025-05-14 02:26:06.540487 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:26:06.540495 | orchestrator | Wednesday 14 May 2025 02:25:18 +0000 (0:00:00.059) 0:01:36.874 ********* 2025-05-14 02:26:06.540503 | orchestrator | 2025-05-14 02:26:06.540514 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:26:06.540522 | orchestrator | Wednesday 14 May 2025 02:25:19 +0000 (0:00:00.301) 0:01:37.175 ********* 2025-05-14 02:26:06.540530 | orchestrator | 2025-05-14 02:26:06.540537 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:26:06.540545 | orchestrator | Wednesday 14 May 2025 02:25:19 +0000 (0:00:00.066) 0:01:37.241 ********* 2025-05-14 02:26:06.540553 | orchestrator | 2025-05-14 02:26:06.540561 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:26:06.540568 | orchestrator | Wednesday 14 May 2025 02:25:19 +0000 (0:00:00.062) 0:01:37.304 ********* 2025-05-14 02:26:06.540576 | orchestrator | 2025-05-14 02:26:06.540584 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-14 02:26:06.540592 | orchestrator | Wednesday 14 May 2025 02:25:19 +0000 (0:00:00.077) 0:01:37.382 ********* 2025-05-14 02:26:06.540604 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:06.540611 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:06.540619 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:06.540627 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:06.540635 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:06.540642 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:06.540650 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.540658 | orchestrator | 2025-05-14 02:26:06.540666 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-14 02:26:06.540674 | orchestrator | Wednesday 14 May 2025 02:25:27 +0000 (0:00:08.093) 0:01:45.475 ********* 2025-05-14 02:26:06.540682 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:06.540689 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:06.540697 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:06.540705 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:06.540712 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.540720 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:06.540728 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:06.540736 | orchestrator | 2025-05-14 02:26:06.540743 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-14 02:26:06.540751 | orchestrator | Wednesday 14 May 2025 02:25:52 +0000 (0:00:25.465) 0:02:10.940 ********* 2025-05-14 02:26:06.540759 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:06.540783 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:26:06.540791 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:06.540799 | 
orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:06.540807 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:26:06.540814 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:26:06.540822 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:26:06.540830 | orchestrator | 2025-05-14 02:26:06.540838 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-14 02:26:06.540846 | orchestrator | Wednesday 14 May 2025 02:25:55 +0000 (0:00:02.219) 0:02:13.160 ********* 2025-05-14 02:26:06.540853 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:06.540861 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:06.540869 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:06.540877 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:06.540885 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:06.540893 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:06.540900 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:06.540908 | orchestrator | 2025-05-14 02:26:06.540916 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:26:06.540924 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:26:06.540933 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:26:06.540941 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:26:06.540954 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:26:06.540962 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:26:06.540970 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:26:06.540978 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:26:06.540991 | orchestrator | 2025-05-14 02:26:06.540999 | orchestrator | 2025-05-14 02:26:06.541007 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:26:06.541015 | orchestrator | Wednesday 14 May 2025 02:26:04 +0000 (0:00:09.262) 0:02:22.422 ********* 2025-05-14 02:26:06.541022 | orchestrator | =============================================================================== 2025-05-14 02:26:06.541030 | orchestrator | common : Ensure fluentd image is present for label check --------------- 39.93s 2025-05-14 02:26:06.541038 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 25.47s 2025-05-14 02:26:06.541046 | orchestrator | common : Restart cron container ----------------------------------------- 9.26s 2025-05-14 02:26:06.541054 | orchestrator | common : Restart fluentd container -------------------------------------- 8.09s 2025-05-14 02:26:06.541062 | orchestrator | common : Copying over config.json files for services -------------------- 6.03s 2025-05-14 02:26:06.541069 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.28s 2025-05-14 02:26:06.541080 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.15s 2025-05-14 02:26:06.541088 | orchestrator | common : Ensuring config directories exist 
------------------------------ 3.54s 2025-05-14 02:26:06.541096 | orchestrator | common : Check common containers ---------------------------------------- 3.51s 2025-05-14 02:26:06.541104 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.30s 2025-05-14 02:26:06.541112 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.97s 2025-05-14 02:26:06.541120 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.53s 2025-05-14 02:26:06.541128 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.48s 2025-05-14 02:26:06.541136 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.22s 2025-05-14 02:26:06.541143 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.07s 2025-05-14 02:26:06.541151 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.03s 2025-05-14 02:26:06.541159 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.01s 2025-05-14 02:26:06.541167 | orchestrator | common : include_tasks -------------------------------------------------- 1.93s 2025-05-14 02:26:06.541175 | orchestrator | common : include_tasks -------------------------------------------------- 1.82s 2025-05-14 02:26:06.541182 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.59s 2025-05-14 02:26:06.541190 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:06.541198 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:06.541206 | orchestrator | 2025-05-14 02:26:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:09.622761 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:09.623427 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:09.624488 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state STARTED 2025-05-14 02:26:09.625921 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:09.627500 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:09.628331 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:09.629075 | orchestrator | 2025-05-14 02:26:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:12.677227 | orchestrator | 2025-05-14 02:26:12 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:12.679845 | orchestrator | 2025-05-14 02:26:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:12.681265 | orchestrator | 2025-05-14 02:26:12 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state STARTED 2025-05-14 02:26:12.682154 | orchestrator | 2025-05-14 02:26:12 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:12.683017 | orchestrator | 2025-05-14 02:26:12 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:12.684051 | 
orchestrator | 2025-05-14 02:26:12 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:12.684214 | orchestrator | 2025-05-14 02:26:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:15.726963 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:15.727287 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:15.728565 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state STARTED 2025-05-14 02:26:15.729461 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:15.730739 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:15.731593 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:15.732065 | orchestrator | 2025-05-14 02:26:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:18.774341 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:18.775499 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:18.776947 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state STARTED 2025-05-14 02:26:18.778119 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:18.779823 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:18.781497 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:18.781747 | orchestrator | 2025-05-14 02:26:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:21.829339 | orchestrator | 2025-05-14 02:26:21 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:21.832155 | orchestrator | 2025-05-14 02:26:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:21.839077 | orchestrator | 2025-05-14 02:26:21 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state STARTED 2025-05-14 02:26:21.840942 | orchestrator | 2025-05-14 02:26:21 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:21.844618 | orchestrator | 2025-05-14 02:26:21 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:21.848605 | orchestrator | 2025-05-14 02:26:21 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:21.848647 | orchestrator | 2025-05-14 02:26:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:24.904169 | orchestrator | 2025-05-14 02:26:24 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:24.908232 | orchestrator | 2025-05-14 02:26:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:24.908306 | orchestrator | 2025-05-14 02:26:24 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state STARTED 2025-05-14 02:26:24.908316 | orchestrator | 2025-05-14 02:26:24 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 
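[editor's note] The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" entries above come from the deployment driver polling its queued tasks until each one reaches a terminal state. A minimal, hypothetical sketch of that polling pattern follows; the wait_for_tasks helper and the fetch_state callback are illustrative stand-ins and not the actual osism CLI or its Celery backend API.

    import time

    # States after which a task no longer needs to be polled (assumed set).
    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, fetch_state, interval=1):
        """Poll fetch_state(task_id) until every task reaches a terminal state."""
        pending = set(task_ids)
        while pending:
            # sorted() copies the set, so discarding while looping is safe.
            for task_id in sorted(pending):
                state = fetch_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

The fixed one-second interval mirrors the "Wait 1 second(s)" messages in this log; how the real driver resolves task states is determined by its task backend, not by this sketch.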
2025-05-14 02:26:24.910944 | orchestrator | 2025-05-14 02:26:24 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:24.911988 | orchestrator | 2025-05-14 02:26:24 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:24.912026 | orchestrator | 2025-05-14 02:26:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:27.980947 | orchestrator | 2025-05-14 02:26:27 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:27.986653 | orchestrator | 2025-05-14 02:26:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:27.993294 | orchestrator | 2025-05-14 02:26:27 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state STARTED 2025-05-14 02:26:27.998691 | orchestrator | 2025-05-14 02:26:27 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:27.998862 | orchestrator | 2025-05-14 02:26:27 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:28.006381 | orchestrator | 2025-05-14 02:26:28 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:28.008169 | orchestrator | 2025-05-14 02:26:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:31.074280 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:31.075583 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:31.076949 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state STARTED 2025-05-14 02:26:31.078151 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:31.079548 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:31.080563 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:31.080929 | orchestrator | 2025-05-14 02:26:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:34.139465 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:34.141539 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:34.142824 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:26:34.143343 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task ac6eebbf-8464-4b32-b20e-d2f9cdf44b2d is in state SUCCESS 2025-05-14 02:26:34.145055 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:34.145526 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:34.146450 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:34.149025 | orchestrator | 2025-05-14 02:26:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:37.185460 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:37.185604 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:37.186357 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:26:37.187126 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:37.188010 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:37.188599 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:37.188647 | orchestrator | 2025-05-14 02:26:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:40.226985 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:40.227117 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:40.228579 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:26:40.229172 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:40.229951 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state STARTED 2025-05-14 02:26:40.232776 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:40.232876 | orchestrator | 2025-05-14 02:26:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:43.284170 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:43.284707 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:43.285651 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:26:43.289625 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:43.290098 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task 27a331e8-4522-4f80-9c4e-5e0450759a52 is in state SUCCESS 2025-05-14 02:26:43.290940 | orchestrator | 2025-05-14 02:26:43.290967 | orchestrator | 2025-05-14 02:26:43.290975 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:26:43.290983 | orchestrator | 2025-05-14 02:26:43.290989 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:26:43.290997 | orchestrator | Wednesday 14 May 2025 02:26:11 +0000 (0:00:00.722) 0:00:00.722 ********* 2025-05-14 02:26:43.291004 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:43.291012 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:26:43.291018 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:43.291025 | orchestrator | 2025-05-14 02:26:43.291031 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:26:43.291038 | orchestrator | Wednesday 14 May 2025 02:26:11 +0000 (0:00:00.619) 0:00:01.342 ********* 2025-05-14 02:26:43.291045 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-14 02:26:43.291052 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-14 02:26:43.291058 | orchestrator | ok: 
[testbed-node-2] => (item=enable_memcached_True) 2025-05-14 02:26:43.291065 | orchestrator | 2025-05-14 02:26:43.291071 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-14 02:26:43.291078 | orchestrator | 2025-05-14 02:26:43.291102 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-14 02:26:43.291109 | orchestrator | Wednesday 14 May 2025 02:26:12 +0000 (0:00:00.561) 0:00:01.903 ********* 2025-05-14 02:26:43.291115 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:26:43.291123 | orchestrator | 2025-05-14 02:26:43.291129 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-14 02:26:43.291148 | orchestrator | Wednesday 14 May 2025 02:26:13 +0000 (0:00:01.354) 0:00:03.258 ********* 2025-05-14 02:26:43.291155 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-14 02:26:43.291161 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-14 02:26:43.291167 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-14 02:26:43.291173 | orchestrator | 2025-05-14 02:26:43.291180 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-14 02:26:43.291186 | orchestrator | Wednesday 14 May 2025 02:26:15 +0000 (0:00:01.482) 0:00:04.740 ********* 2025-05-14 02:26:43.291192 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-14 02:26:43.291198 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-14 02:26:43.291205 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-14 02:26:43.291211 | orchestrator | 2025-05-14 02:26:43.291217 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-14 02:26:43.291223 | orchestrator | Wednesday 14 May 2025 02:26:17 +0000 (0:00:02.472) 0:00:07.213 ********* 2025-05-14 02:26:43.291230 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:43.291236 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:43.291242 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:43.291248 | orchestrator | 2025-05-14 02:26:43.291255 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-14 02:26:43.291261 | orchestrator | Wednesday 14 May 2025 02:26:21 +0000 (0:00:03.726) 0:00:10.939 ********* 2025-05-14 02:26:43.291267 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:43.291273 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:43.291279 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:43.291286 | orchestrator | 2025-05-14 02:26:43.291292 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:26:43.291299 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:43.291307 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:43.291313 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:43.291319 | orchestrator | 2025-05-14 02:26:43.291326 | orchestrator | 2025-05-14 02:26:43.291332 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 
02:26:43.291338 | orchestrator | Wednesday 14 May 2025 02:26:30 +0000 (0:00:08.784) 0:00:19.724 ********* 2025-05-14 02:26:43.291345 | orchestrator | =============================================================================== 2025-05-14 02:26:43.291351 | orchestrator | memcached : Restart memcached container --------------------------------- 8.78s 2025-05-14 02:26:43.291357 | orchestrator | memcached : Check memcached container ----------------------------------- 3.73s 2025-05-14 02:26:43.291364 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.47s 2025-05-14 02:26:43.291370 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.48s 2025-05-14 02:26:43.291376 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.35s 2025-05-14 02:26:43.291382 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s 2025-05-14 02:26:43.291388 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2025-05-14 02:26:43.291400 | orchestrator | 2025-05-14 02:26:43.291407 | orchestrator | 2025-05-14 02:26:43.291413 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:26:43.291420 | orchestrator | 2025-05-14 02:26:43.291426 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:26:43.291432 | orchestrator | Wednesday 14 May 2025 02:26:09 +0000 (0:00:00.525) 0:00:00.525 ********* 2025-05-14 02:26:43.291438 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:43.291445 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:26:43.291451 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:43.291457 | orchestrator | 2025-05-14 02:26:43.291464 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:26:43.291480 | orchestrator | Wednesday 14 May 2025 02:26:10 +0000 (0:00:00.547) 0:00:01.073 ********* 2025-05-14 02:26:43.291487 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-14 02:26:43.291494 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-14 02:26:43.291500 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-14 02:26:43.291506 | orchestrator | 2025-05-14 02:26:43.291512 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-14 02:26:43.291518 | orchestrator | 2025-05-14 02:26:43.291524 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-14 02:26:43.291531 | orchestrator | Wednesday 14 May 2025 02:26:10 +0000 (0:00:00.622) 0:00:01.695 ********* 2025-05-14 02:26:43.291537 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:26:43.291544 | orchestrator | 2025-05-14 02:26:43.291550 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-14 02:26:43.291556 | orchestrator | Wednesday 14 May 2025 02:26:12 +0000 (0:00:01.338) 0:00:03.034 ********* 2025-05-14 02:26:43.291572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291637 | orchestrator | 2025-05-14 02:26:43.291645 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-14 02:26:43.291653 | orchestrator | Wednesday 14 May 2025 02:26:14 +0000 (0:00:02.106) 0:00:05.141 ********* 2025-05-14 02:26:43.291664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291721 | orchestrator | 2025-05-14 02:26:43.291728 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-14 02:26:43.291736 | orchestrator | Wednesday 14 May 2025 02:26:17 +0000 (0:00:03.348) 0:00:08.489 ********* 2025-05-14 02:26:43.291744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291824 | orchestrator | 2025-05-14 02:26:43.291832 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-14 02:26:43.291839 | orchestrator | Wednesday 14 May 2025 02:26:22 +0000 (0:00:04.561) 0:00:13.050 ********* 2025-05-14 02:26:43.291846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:26:43.291909 | orchestrator | 2025-05-14 02:26:43.291917 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-14 02:26:43.291924 | orchestrator | Wednesday 14 May 2025 02:26:25 +0000 (0:00:02.963) 0:00:16.014 ********* 2025-05-14 02:26:43.291932 | orchestrator | 2025-05-14 02:26:43.291939 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-14 02:26:43.291945 | orchestrator | Wednesday 14 May 2025 02:26:25 +0000 (0:00:00.179) 0:00:16.193 ********* 2025-05-14 02:26:43.291951 | orchestrator | 2025-05-14 02:26:43.291958 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-14 02:26:43.291964 | orchestrator | Wednesday 14 May 2025 02:26:25 +0000 (0:00:00.315) 0:00:16.508 ********* 2025-05-14 02:26:43.291970 | orchestrator | 2025-05-14 02:26:43.291977 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-14 02:26:43.291983 | 
orchestrator | Wednesday 14 May 2025 02:26:26 +0000 (0:00:00.343) 0:00:16.852 ********* 2025-05-14 02:26:43.291990 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:43.291996 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:43.292003 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:43.292009 | orchestrator | 2025-05-14 02:26:43.292015 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-14 02:26:43.292022 | orchestrator | Wednesday 14 May 2025 02:26:35 +0000 (0:00:09.428) 0:00:26.281 ********* 2025-05-14 02:26:43.292028 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:43.292035 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:43.292044 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:43.292051 | orchestrator | 2025-05-14 02:26:43.292057 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:26:43.292064 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:43.292075 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:43.292082 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:43.292088 | orchestrator | 2025-05-14 02:26:43.292094 | orchestrator | 2025-05-14 02:26:43.292101 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:26:43.292107 | orchestrator | Wednesday 14 May 2025 02:26:39 +0000 (0:00:04.466) 0:00:30.748 ********* 2025-05-14 02:26:43.292113 | orchestrator | =============================================================================== 2025-05-14 02:26:43.292119 | orchestrator | redis : Restart redis container ----------------------------------------- 9.43s 2025-05-14 02:26:43.292126 | orchestrator | redis : Copying over redis config files --------------------------------- 4.56s 2025-05-14 02:26:43.292132 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.47s 2025-05-14 02:26:43.292138 | orchestrator | redis : Copying over default config.json files -------------------------- 3.35s 2025-05-14 02:26:43.292144 | orchestrator | redis : Check redis containers ------------------------------------------ 2.96s 2025-05-14 02:26:43.292151 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.11s 2025-05-14 02:26:43.292157 | orchestrator | redis : include_tasks --------------------------------------------------- 1.34s 2025-05-14 02:26:43.292163 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.84s 2025-05-14 02:26:43.292170 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2025-05-14 02:26:43.292176 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.55s 2025-05-14 02:26:43.292202 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:43.292210 | orchestrator | 2025-05-14 02:26:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:46.341284 | orchestrator | 2025-05-14 02:26:46 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:46.344603 | orchestrator | 2025-05-14 02:26:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is 
in state STARTED 2025-05-14 02:26:46.345657 | orchestrator | 2025-05-14 02:26:46 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:26:46.346199 | orchestrator | 2025-05-14 02:26:46 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:46.347024 | orchestrator | 2025-05-14 02:26:46 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:46.347056 | orchestrator | 2025-05-14 02:26:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:49.391688 | orchestrator | 2025-05-14 02:26:49 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:49.392265 | orchestrator | 2025-05-14 02:26:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:49.392939 | orchestrator | 2025-05-14 02:26:49 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:26:49.393526 | orchestrator | 2025-05-14 02:26:49 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:49.394432 | orchestrator | 2025-05-14 02:26:49 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:49.394470 | orchestrator | 2025-05-14 02:26:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:52.428916 | orchestrator | 2025-05-14 02:26:52 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:52.430722 | orchestrator | 2025-05-14 02:26:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:52.432780 | orchestrator | 2025-05-14 02:26:52 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:26:52.433607 | orchestrator | 2025-05-14 02:26:52 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:52.434283 | orchestrator | 2025-05-14 02:26:52 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:52.434331 | orchestrator | 2025-05-14 02:26:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:55.478870 | orchestrator | 2025-05-14 02:26:55 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:55.485283 | orchestrator | 2025-05-14 02:26:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:55.486281 | orchestrator | 2025-05-14 02:26:55 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:26:55.488233 | orchestrator | 2025-05-14 02:26:55 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:55.492319 | orchestrator | 2025-05-14 02:26:55 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:55.492388 | orchestrator | 2025-05-14 02:26:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:58.522271 | orchestrator | 2025-05-14 02:26:58 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:26:58.522419 | orchestrator | 2025-05-14 02:26:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:26:58.523098 | orchestrator | 2025-05-14 02:26:58 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:26:58.523637 | orchestrator | 2025-05-14 02:26:58 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:26:58.524211 | orchestrator | 2025-05-14 02:26:58 | INFO  | Task 
0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:26:58.524293 | orchestrator | 2025-05-14 02:26:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:01.571401 | orchestrator | 2025-05-14 02:27:01 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:01.571750 | orchestrator | 2025-05-14 02:27:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:01.572396 | orchestrator | 2025-05-14 02:27:01 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:01.573238 | orchestrator | 2025-05-14 02:27:01 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:01.573723 | orchestrator | 2025-05-14 02:27:01 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:01.573737 | orchestrator | 2025-05-14 02:27:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:04.606279 | orchestrator | 2025-05-14 02:27:04 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:04.606459 | orchestrator | 2025-05-14 02:27:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:04.606948 | orchestrator | 2025-05-14 02:27:04 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:04.607760 | orchestrator | 2025-05-14 02:27:04 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:04.612467 | orchestrator | 2025-05-14 02:27:04 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:04.612566 | orchestrator | 2025-05-14 02:27:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:07.645572 | orchestrator | 2025-05-14 02:27:07 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:07.647638 | orchestrator | 2025-05-14 02:27:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:07.649933 | orchestrator | 2025-05-14 02:27:07 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:07.651954 | orchestrator | 2025-05-14 02:27:07 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:07.653594 | orchestrator | 2025-05-14 02:27:07 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:07.654006 | orchestrator | 2025-05-14 02:27:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:10.698366 | orchestrator | 2025-05-14 02:27:10 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:10.698505 | orchestrator | 2025-05-14 02:27:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:10.700174 | orchestrator | 2025-05-14 02:27:10 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:10.701606 | orchestrator | 2025-05-14 02:27:10 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:10.703036 | orchestrator | 2025-05-14 02:27:10 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:10.703084 | orchestrator | 2025-05-14 02:27:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:13.745391 | orchestrator | 2025-05-14 02:27:13 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:13.746207 | orchestrator | 2025-05-14 02:27:13 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:13.746419 | orchestrator | 2025-05-14 02:27:13 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:13.747725 | orchestrator | 2025-05-14 02:27:13 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:13.749169 | orchestrator | 2025-05-14 02:27:13 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:13.749207 | orchestrator | 2025-05-14 02:27:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:16.787402 | orchestrator | 2025-05-14 02:27:16 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:16.787574 | orchestrator | 2025-05-14 02:27:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:16.787724 | orchestrator | 2025-05-14 02:27:16 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:16.788587 | orchestrator | 2025-05-14 02:27:16 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:16.789319 | orchestrator | 2025-05-14 02:27:16 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:16.789358 | orchestrator | 2025-05-14 02:27:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:19.831417 | orchestrator | 2025-05-14 02:27:19 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:19.831600 | orchestrator | 2025-05-14 02:27:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:19.832429 | orchestrator | 2025-05-14 02:27:19 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:19.833161 | orchestrator | 2025-05-14 02:27:19 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:19.834268 | orchestrator | 2025-05-14 02:27:19 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:19.834302 | orchestrator | 2025-05-14 02:27:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:22.871962 | orchestrator | 2025-05-14 02:27:22 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:22.872218 | orchestrator | 2025-05-14 02:27:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:22.873303 | orchestrator | 2025-05-14 02:27:22 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:22.873951 | orchestrator | 2025-05-14 02:27:22 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:22.874940 | orchestrator | 2025-05-14 02:27:22 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:22.875009 | orchestrator | 2025-05-14 02:27:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:25.922291 | orchestrator | 2025-05-14 02:27:25 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:25.922945 | orchestrator | 2025-05-14 02:27:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:25.925090 | orchestrator | 2025-05-14 02:27:25 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:25.926966 | orchestrator | 2025-05-14 02:27:25 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:25.930788 | orchestrator | 2025-05-14 
02:27:25 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:25.930908 | orchestrator | 2025-05-14 02:27:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:28.969257 | orchestrator | 2025-05-14 02:27:28 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state STARTED 2025-05-14 02:27:28.969338 | orchestrator | 2025-05-14 02:27:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:28.969411 | orchestrator | 2025-05-14 02:27:28 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:28.975301 | orchestrator | 2025-05-14 02:27:28 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:28.975723 | orchestrator | 2025-05-14 02:27:28 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:28.978004 | orchestrator | 2025-05-14 02:27:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:32.025432 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task f6d22afa-506a-4a6a-8e04-1de1cd86ce9e is in state SUCCESS 2025-05-14 02:27:32.028659 | orchestrator | 2025-05-14 02:27:32.028756 | orchestrator | 2025-05-14 02:27:32.028784 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:27:32.028873 | orchestrator | 2025-05-14 02:27:32.028899 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:27:32.028919 | orchestrator | Wednesday 14 May 2025 02:26:10 +0000 (0:00:00.453) 0:00:00.453 ********* 2025-05-14 02:27:32.028933 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:27:32.028946 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:27:32.028957 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:27:32.028968 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:27:32.028979 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:27:32.028990 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:27:32.029001 | orchestrator | 2025-05-14 02:27:32.029012 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:27:32.029050 | orchestrator | Wednesday 14 May 2025 02:26:11 +0000 (0:00:01.412) 0:00:01.865 ********* 2025-05-14 02:27:32.029062 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:27:32.029073 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:27:32.029086 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:27:32.029097 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:27:32.029110 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:27:32.029129 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:27:32.029148 | orchestrator | 2025-05-14 02:27:32.029168 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-14 02:27:32.029187 | orchestrator | 2025-05-14 02:27:32.029206 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-14 02:27:32.029224 | orchestrator | Wednesday 14 May 2025 02:26:13 +0000 (0:00:02.276) 0:00:04.141 ********* 2025-05-14 02:27:32.029242 | orchestrator | included: 
/ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:27:32.029261 | orchestrator | 2025-05-14 02:27:32.029281 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-14 02:27:32.029301 | orchestrator | Wednesday 14 May 2025 02:26:16 +0000 (0:00:02.651) 0:00:06.792 ********* 2025-05-14 02:27:32.029320 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-14 02:27:32.029337 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-14 02:27:32.029351 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-14 02:27:32.029364 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-14 02:27:32.029377 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-14 02:27:32.029411 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-14 02:27:32.029430 | orchestrator | 2025-05-14 02:27:32.029442 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-14 02:27:32.029453 | orchestrator | Wednesday 14 May 2025 02:26:18 +0000 (0:00:01.589) 0:00:08.382 ********* 2025-05-14 02:27:32.029464 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-14 02:27:32.029474 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-14 02:27:32.029485 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-14 02:27:32.029496 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-14 02:27:32.029507 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-14 02:27:32.029518 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-14 02:27:32.029529 | orchestrator | 2025-05-14 02:27:32.029539 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-14 02:27:32.029550 | orchestrator | Wednesday 14 May 2025 02:26:21 +0000 (0:00:02.974) 0:00:11.356 ********* 2025-05-14 02:27:32.029564 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-14 02:27:32.029585 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:27:32.029606 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-14 02:27:32.029627 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:27:32.029647 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-14 02:27:32.029665 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:27:32.029682 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-14 02:27:32.029693 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:32.029704 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-14 02:27:32.029715 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:32.029726 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-14 02:27:32.029748 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:32.029760 | orchestrator | 2025-05-14 02:27:32.029771 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-14 02:27:32.029782 | orchestrator | Wednesday 14 May 2025 02:26:23 +0000 (0:00:02.442) 0:00:13.798 ********* 2025-05-14 02:27:32.029793 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:27:32.029832 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:27:32.029846 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 02:27:32.029857 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:32.029868 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:32.029878 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:32.029889 | orchestrator | 2025-05-14 02:27:32.029901 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-14 02:27:32.029911 | orchestrator | Wednesday 14 May 2025 02:26:24 +0000 (0:00:00.873) 0:00:14.672 ********* 2025-05-14 02:27:32.029977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 
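Editor's note: the per-item dictionaries printed above and below are the Kolla service definitions as logged for this run (one result line per host per item). To make that structure easier to read, here is an illustrative Python rendering of the openvswitch-vswitchd entry exactly as it appears in the log, followed by a hypothetical per-service loop standing in for the "Ensuring config directories exist" step; the directory path and the loop are assumptions for illustration only, not the actual Kolla-Ansible role code.

```python
# Illustrative sketch only: mirrors the service definition printed in this log
# for openvswitch_vswitchd. The loop below is a hypothetical stand-in for the
# role's "Ensuring config directories exist" task, not the real implementation.
from pathlib import Path

services = {
    "openvswitch-vswitchd": {
        "container_name": "openvswitch_vswitchd",
        "image": "registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206",
        "enabled": True,
        "group": "openvswitch",
        "host_in_groups": True,
        "privileged": True,
        "volumes": [
            "/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "/lib/modules:/lib/modules:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "ovs-appctl version"],
            "timeout": "30",
        },
    },
}

# Hypothetical iteration: one host-side config directory per enabled service
# (the /etc/kolla/<service> path matches the bind mounts shown in the log).
for name, service in services.items():
    if service.get("enabled"):
        Path(f"/etc/kolla/{name}").mkdir(parents=True, exist_ok=True)
```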
2025-05-14 02:27:32.030160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030217 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030240 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030279 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030400 | orchestrator | 2025-05-14 02:27:32.030419 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-14 02:27:32.030438 | orchestrator | Wednesday 14 May 2025 02:26:27 +0000 (0:00:03.298) 0:00:17.971 ********* 2025-05-14 02:27:32.030459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030477 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030594 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.030736 | orchestrator | 2025-05-14 02:27:32.030747 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-14 02:27:32.030759 | orchestrator | Wednesday 14 May 2025 02:26:31 +0000 (0:00:04.051) 0:00:22.022 ********* 2025-05-14 02:27:32.030770 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:32.030781 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:32.030792 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:32.030865 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:32.030881 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:32.030899 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:32.030917 | orchestrator | 2025-05-14 02:27:32.030936 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-14 02:27:32.030954 | orchestrator | Wednesday 14 May 2025 02:26:35 +0000 (0:00:03.678) 0:00:25.700 ********* 2025-05-14 02:27:32.030972 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:32.030991 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:32.031010 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:32.031029 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:32.031048 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:32.031066 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:32.031084 | orchestrator | 2025-05-14 02:27:32.031115 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-14 02:27:32.031135 | orchestrator | Wednesday 14 May 2025 02:26:38 +0000 (0:00:02.915) 0:00:28.616 ********* 2025-05-14 02:27:32.031155 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:27:32.031173 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:27:32.031192 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:27:32.031210 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:32.031229 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:32.031247 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:32.031266 | orchestrator | 2025-05-14 02:27:32.031284 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-14 02:27:32.031303 | orchestrator | Wednesday 14 May 2025 02:26:39 +0000 (0:00:01.320) 0:00:29.937 
********* 2025-05-14 02:27:32.031323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031546 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:27:32.031636 | orchestrator | 2025-05-14 02:27:32.031655 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:27:32.031674 | orchestrator | Wednesday 14 May 2025 02:26:43 +0000 (0:00:03.425) 0:00:33.362 ********* 2025-05-14 02:27:32.031714 | orchestrator | 2025-05-14 02:27:32.031734 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:27:32.031752 | orchestrator | Wednesday 14 May 2025 02:26:43 +0000 (0:00:00.163) 0:00:33.525 ********* 2025-05-14 02:27:32.031770 | orchestrator | 2025-05-14 02:27:32.031787 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:27:32.031864 | orchestrator | Wednesday 14 May 2025 02:26:43 +0000 (0:00:00.256) 0:00:33.782 ********* 2025-05-14 02:27:32.031888 | orchestrator | 2025-05-14 02:27:32.031908 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:27:32.031926 | orchestrator | Wednesday 14 May 2025 02:26:43 +0000 (0:00:00.131) 0:00:33.914 ********* 2025-05-14 02:27:32.031944 | orchestrator | 2025-05-14 02:27:32.031962 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:27:32.031982 | orchestrator | Wednesday 14 May 2025 02:26:44 +0000 (0:00:00.401) 0:00:34.315 ********* 2025-05-14 02:27:32.032001 | orchestrator | 2025-05-14 02:27:32.032020 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 
02:27:32.032038 | orchestrator | Wednesday 14 May 2025 02:26:44 +0000 (0:00:00.246) 0:00:34.562 ********* 2025-05-14 02:27:32.032057 | orchestrator | 2025-05-14 02:27:32.032074 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-14 02:27:32.032095 | orchestrator | Wednesday 14 May 2025 02:26:44 +0000 (0:00:00.366) 0:00:34.928 ********* 2025-05-14 02:27:32.032115 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:32.032134 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:32.032152 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:32.032170 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:32.032188 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:32.032207 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:32.032225 | orchestrator | 2025-05-14 02:27:32.032244 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-14 02:27:32.032271 | orchestrator | Wednesday 14 May 2025 02:26:56 +0000 (0:00:12.147) 0:00:47.076 ********* 2025-05-14 02:27:32.032302 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:27:32.032336 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:27:32.032355 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:27:32.032373 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:27:32.032391 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:27:32.032409 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:27:32.032428 | orchestrator | 2025-05-14 02:27:32.032447 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-14 02:27:32.032466 | orchestrator | Wednesday 14 May 2025 02:26:58 +0000 (0:00:01.557) 0:00:48.633 ********* 2025-05-14 02:27:32.032485 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:32.032503 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:32.032521 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:32.032540 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:32.032558 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:32.032578 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:32.032598 | orchestrator | 2025-05-14 02:27:32.032617 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-14 02:27:32.032637 | orchestrator | Wednesday 14 May 2025 02:27:08 +0000 (0:00:10.027) 0:00:58.661 ********* 2025-05-14 02:27:32.032655 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-14 02:27:32.032674 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-14 02:27:32.032692 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-14 02:27:32.032711 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-14 02:27:32.032731 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-14 02:27:32.032749 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-14 02:27:32.032768 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 
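The 'Set system-id, hostname and hw-offload' items above boil down to plain ovs-vsctl updates of the Open_vSwitch table. A minimal sketch of an equivalent manual change on a single node, assuming the commands are run through the openvswitch_vswitchd container named in this log (they could equally run on the host if the OVS tools are installed there), might look like:

    # write the same external_ids the role reports as changed (values taken from the log items)
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-2
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-2
    # hw-offload is listed with state 'absent', i.e. the key is dropped from other_config rather than set
    docker exec openvswitch_vswitchd ovs-vsctl remove Open_vSwitch . other_config hw-offload

The 'ok:' (rather than 'changed:') results for the hw-offload items simply indicate the key was already absent, so nothing had to be removed.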
2025-05-14 02:27:32.032786 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-14 02:27:32.032872 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-14 02:27:32.032895 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-14 02:27:32.032913 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-14 02:27:32.032933 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-14 02:27:32.032952 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:27:32.032970 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:27:32.032988 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:27:32.033007 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:27:32.033025 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:27:32.033044 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:27:32.033063 | orchestrator | 2025-05-14 02:27:32.033082 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-14 02:27:32.033101 | orchestrator | Wednesday 14 May 2025 02:27:16 +0000 (0:00:08.161) 0:01:06.823 ********* 2025-05-14 02:27:32.033135 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-14 02:27:32.033155 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:32.033172 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-14 02:27:32.033191 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:32.033209 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-14 02:27:32.033228 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:32.033247 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-14 02:27:32.033265 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-14 02:27:32.033284 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-14 02:27:32.033302 | orchestrator | 2025-05-14 02:27:32.033320 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-14 02:27:32.033337 | orchestrator | Wednesday 14 May 2025 02:27:19 +0000 (0:00:02.418) 0:01:09.241 ********* 2025-05-14 02:27:32.033353 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-14 02:27:32.033369 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:32.033387 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-14 02:27:32.033405 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:32.033423 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-14 02:27:32.033440 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:32.033467 | orchestrator | 
changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-14 02:27:32.033495 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-14 02:27:32.033513 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-14 02:27:32.033531 | orchestrator | 2025-05-14 02:27:32.033548 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-14 02:27:32.033563 | orchestrator | Wednesday 14 May 2025 02:27:23 +0000 (0:00:03.939) 0:01:13.180 ********* 2025-05-14 02:27:32.033580 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:32.033597 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:32.033613 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:32.033631 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:32.033648 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:32.033666 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:32.033682 | orchestrator | 2025-05-14 02:27:32.033697 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:27:32.033713 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:27:32.033724 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:27:32.033734 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:27:32.033744 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:27:32.033753 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:27:32.033763 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:27:32.033773 | orchestrator | 2025-05-14 02:27:32.033783 | orchestrator | 2025-05-14 02:27:32.033792 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:27:32.033802 | orchestrator | Wednesday 14 May 2025 02:27:30 +0000 (0:00:07.846) 0:01:21.027 ********* 2025-05-14 02:27:32.033847 | orchestrator | =============================================================================== 2025-05-14 02:27:32.033868 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.87s 2025-05-14 02:27:32.033878 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.15s 2025-05-14 02:27:32.033887 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.16s 2025-05-14 02:27:32.033897 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.04s 2025-05-14 02:27:32.033911 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.94s 2025-05-14 02:27:32.033928 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.68s 2025-05-14 02:27:32.033943 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.43s 2025-05-14 02:27:32.033959 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.31s 2025-05-14 02:27:32.033974 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.97s 2025-05-14 02:27:32.033990 | orchestrator | 
openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.92s 2025-05-14 02:27:32.034005 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.65s 2025-05-14 02:27:32.034067 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.44s 2025-05-14 02:27:32.034078 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.42s 2025-05-14 02:27:32.034088 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.28s 2025-05-14 02:27:32.034098 | orchestrator | module-load : Load modules ---------------------------------------------- 1.59s 2025-05-14 02:27:32.034107 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.57s 2025-05-14 02:27:32.034117 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.56s 2025-05-14 02:27:32.034126 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.41s 2025-05-14 02:27:32.034136 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.32s 2025-05-14 02:27:32.034146 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.87s 2025-05-14 02:27:32.034848 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:32.034965 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:32.034980 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:32.034991 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:32.035003 | orchestrator | 2025-05-14 02:27:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:35.091638 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:35.097697 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:35.099778 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:35.100723 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:27:35.101503 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:35.101530 | orchestrator | 2025-05-14 02:27:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:38.147202 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:27:38.151159 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:27:38.151592 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:27:38.152442 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:27:38.153383 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:27:38.153414 | orchestrator | 2025-05-14 02:27:38 | INFO  | Wait 1 second(s) until 
the next check 2025-05-14 02:28:48.264022 | orchestrator | 2025-05-14 02:28:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:28:48.264646 | orchestrator | 2025-05-14 02:28:48 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:28:48.265560 |
orchestrator | 2025-05-14 02:28:48 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:28:48.266359 | orchestrator | 2025-05-14 02:28:48 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:28:48.267295 | orchestrator | 2025-05-14 02:28:48 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:28:48.267367 | orchestrator | 2025-05-14 02:28:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:28:51.306445 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:28:51.306665 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:28:51.307654 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:28:51.312012 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:28:51.314136 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:28:51.314175 | orchestrator | 2025-05-14 02:28:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:28:54.361696 | orchestrator | 2025-05-14 02:28:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:28:54.363926 | orchestrator | 2025-05-14 02:28:54 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state STARTED 2025-05-14 02:28:54.367048 | orchestrator | 2025-05-14 02:28:54 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:28:54.374506 | orchestrator | 2025-05-14 02:28:54 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:28:54.375121 | orchestrator | 2025-05-14 02:28:54 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:28:54.375151 | orchestrator | 2025-05-14 02:28:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:28:57.414941 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:28:57.415079 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task c482b33d-fc36-4d07-b0ef-fde13e09b234 is in state SUCCESS 2025-05-14 02:28:57.415887 | orchestrator | 2025-05-14 02:28:57.415921 | orchestrator | 2025-05-14 02:28:57.415928 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-14 02:28:57.415935 | orchestrator | 2025-05-14 02:28:57.415942 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-14 02:28:57.415948 | orchestrator | Wednesday 14 May 2025 02:26:37 +0000 (0:00:00.129) 0:00:00.129 ********* 2025-05-14 02:28:57.415955 | orchestrator | ok: [localhost] => { 2025-05-14 02:28:57.415964 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-05-14 02:28:57.415971 | orchestrator | } 2025-05-14 02:28:57.415977 | orchestrator | 2025-05-14 02:28:57.415984 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-14 02:28:57.415991 | orchestrator | Wednesday 14 May 2025 02:26:37 +0000 (0:00:00.044) 0:00:00.174 ********* 2025-05-14 02:28:57.415998 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-14 02:28:57.416006 | orchestrator | ...ignoring 2025-05-14 02:28:57.416012 | orchestrator | 2025-05-14 02:28:57.416018 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-14 02:28:57.416023 | orchestrator | Wednesday 14 May 2025 02:26:40 +0000 (0:00:02.677) 0:00:02.851 ********* 2025-05-14 02:28:57.416029 | orchestrator | skipping: [localhost] 2025-05-14 02:28:57.416035 | orchestrator | 2025-05-14 02:28:57.416041 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-14 02:28:57.416047 | orchestrator | Wednesday 14 May 2025 02:26:40 +0000 (0:00:00.131) 0:00:02.983 ********* 2025-05-14 02:28:57.416053 | orchestrator | ok: [localhost] 2025-05-14 02:28:57.416059 | orchestrator | 2025-05-14 02:28:57.416065 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:28:57.416072 | orchestrator | 2025-05-14 02:28:57.416078 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:28:57.416105 | orchestrator | Wednesday 14 May 2025 02:26:41 +0000 (0:00:00.510) 0:00:03.494 ********* 2025-05-14 02:28:57.416112 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:28:57.416119 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:28:57.416125 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:28:57.416132 | orchestrator | 2025-05-14 02:28:57.416183 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:28:57.416191 | orchestrator | Wednesday 14 May 2025 02:26:41 +0000 (0:00:00.797) 0:00:04.292 ********* 2025-05-14 02:28:57.416198 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-05-14 02:28:57.416205 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-05-14 02:28:57.416212 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-05-14 02:28:57.416218 | orchestrator | 2025-05-14 02:28:57.416225 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-05-14 02:28:57.416231 | orchestrator | 2025-05-14 02:28:57.416238 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-14 02:28:57.416245 | orchestrator | Wednesday 14 May 2025 02:26:43 +0000 (0:00:01.126) 0:00:05.418 ********* 2025-05-14 02:28:57.416252 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:28:57.416259 | orchestrator | 2025-05-14 02:28:57.416265 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-14 02:28:57.416272 | orchestrator | Wednesday 14 May 2025 02:26:44 +0000 (0:00:00.974) 0:00:06.393 ********* 2025-05-14 02:28:57.416279 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:28:57.416285 | orchestrator | 2025-05-14 02:28:57.416292 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-05-14 02:28:57.416299 | orchestrator | Wednesday 14 May 2025 02:26:45 +0000 (0:00:01.340) 0:00:07.734 ********* 2025-05-14 02:28:57.416306 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:57.416312 | orchestrator | 2025-05-14 02:28:57.416319 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-05-14 02:28:57.416326 | orchestrator | Wednesday 14 May 2025 02:26:46 +0000 (0:00:00.787) 0:00:08.521 ********* 2025-05-14 02:28:57.416332 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:57.416339 | orchestrator | 2025-05-14 02:28:57.416345 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-14 02:28:57.416352 | orchestrator | Wednesday 14 May 2025 02:26:47 +0000 (0:00:01.017) 0:00:09.538 ********* 2025-05-14 02:28:57.416358 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:57.416365 | orchestrator | 2025-05-14 02:28:57.416371 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-14 02:28:57.416378 | orchestrator | Wednesday 14 May 2025 02:26:47 +0000 (0:00:00.382) 0:00:09.921 ********* 2025-05-14 02:28:57.416384 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:57.416391 | orchestrator | 2025-05-14 02:28:57.416397 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-14 02:28:57.416404 | orchestrator | Wednesday 14 May 2025 02:26:48 +0000 (0:00:00.412) 0:00:10.334 ********* 2025-05-14 02:28:57.416411 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:28:57.416417 | orchestrator | 2025-05-14 02:28:57.416424 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-14 02:28:57.416430 | orchestrator | Wednesday 14 May 2025 02:26:49 +0000 (0:00:01.144) 0:00:11.479 ********* 2025-05-14 02:28:57.416437 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:28:57.416443 | orchestrator | 2025-05-14 02:28:57.416450 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-14 02:28:57.416457 | orchestrator | Wednesday 14 May 2025 02:26:50 +0000 (0:00:01.119) 0:00:12.598 ********* 2025-05-14 02:28:57.416463 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:57.416470 | orchestrator | 2025-05-14 02:28:57.416476 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-14 02:28:57.416488 | orchestrator | Wednesday 14 May 2025 02:26:50 +0000 (0:00:00.432) 0:00:13.031 ********* 2025-05-14 02:28:57.416495 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:57.416504 | orchestrator | 2025-05-14 02:28:57.416521 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-14 02:28:57.416528 | orchestrator | Wednesday 14 May 2025 02:26:51 +0000 (0:00:00.356) 0:00:13.388 ********* 2025-05-14 02:28:57.416542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:28:57.416557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:28:57.416567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:28:57.416575 | orchestrator | 2025-05-14 02:28:57.416583 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-14 02:28:57.416591 | orchestrator | Wednesday 14 May 2025 02:26:52 +0000 (0:00:01.071) 0:00:14.459 ********* 2025-05-14 02:28:57.416607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:28:57.416625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:28:57.416636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:28:57.416645 | orchestrator | 2025-05-14 02:28:57.416651 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-14 02:28:57.416658 | orchestrator | Wednesday 14 May 2025 02:26:54 +0000 (0:00:02.262) 0:00:16.721 ********* 2025-05-14 02:28:57.416666 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-14 02:28:57.416673 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-14 02:28:57.416680 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-14 02:28:57.416686 | orchestrator | 2025-05-14 02:28:57.416693 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-14 02:28:57.416700 | orchestrator | Wednesday 14 May 2025 02:26:56 +0000 (0:00:01.669) 0:00:18.391 ********* 
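The copy tasks in this play render the RabbitMQ files (config.json, rabbitmq-env.conf, rabbitmq.conf and, further below, erl_inetrc, advanced.config, definitions.json and enabled_plugins) into /etc/kolla/rabbitmq/ on each controller; per the volume list in the task items above, that directory is bind-mounted read-only into the container at /var/lib/kolla/config_files/, and KOLLA_CONFIG_STRATEGY=COPY_ALWAYS re-copies the files into place on every container start. A minimal sketch of how the rendered result could be inspected (host and container names are taken from this log; the exact file set varies with the kolla-ansible release):

# Sketch, not part of the job output: run on one of the controllers, e.g. testbed-node-0.
ls -l /etc/kolla/rabbitmq/                                              # files rendered by the tasks above
sudo docker exec rabbitmq cat /var/lib/kolla/config_files/config.json   # same files seen from inside the container, once it is running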
2025-05-14 02:28:57.416712 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-14 02:28:57.416719 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-14 02:28:57.416726 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-14 02:28:57.416733 | orchestrator | 2025-05-14 02:28:57.416740 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-14 02:28:57.416746 | orchestrator | Wednesday 14 May 2025 02:26:58 +0000 (0:00:02.395) 0:00:20.786 ********* 2025-05-14 02:28:57.416753 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-14 02:28:57.416760 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-14 02:28:57.416765 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-14 02:28:57.416772 | orchestrator | 2025-05-14 02:28:57.416782 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-14 02:28:57.416789 | orchestrator | Wednesday 14 May 2025 02:27:01 +0000 (0:00:03.302) 0:00:24.088 ********* 2025-05-14 02:28:57.416796 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-14 02:28:57.416803 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-14 02:28:57.416831 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-14 02:28:57.416838 | orchestrator | 2025-05-14 02:28:57.416845 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-14 02:28:57.416851 | orchestrator | Wednesday 14 May 2025 02:27:03 +0000 (0:00:01.867) 0:00:25.955 ********* 2025-05-14 02:28:57.416858 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-14 02:28:57.416864 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-14 02:28:57.416870 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-14 02:28:57.416875 | orchestrator | 2025-05-14 02:28:57.416881 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-14 02:28:57.416888 | orchestrator | Wednesday 14 May 2025 02:27:05 +0000 (0:00:01.461) 0:00:27.417 ********* 2025-05-14 02:28:57.416894 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-14 02:28:57.416901 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-14 02:28:57.416907 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-14 02:28:57.416913 | orchestrator | 2025-05-14 02:28:57.416920 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-14 02:28:57.416926 | orchestrator | Wednesday 14 May 2025 02:27:06 +0000 (0:00:01.398) 0:00:28.816 ********* 2025-05-14 02:28:57.416933 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:57.416940 | orchestrator | skipping: [testbed-node-1] 
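Once the bootstrap container and the per-node restart plays further down in this log have finished, the cluster can be checked directly. A minimal sketch, assuming the container name rabbitmq shown above, the Docker engine kolla-ansible uses on this testbed, and the internal VIP 192.168.16.9 from the ignored management-UI check at the start of this task list:

# Sketch, not part of the job output: run on a controller after the restart plays below have completed.
sudo docker exec rabbitmq rabbitmqctl cluster_status                  # should list testbed-node-0/1/2 as running nodes
sudo docker exec rabbitmq rabbitmq-diagnostics check_running
curl -s http://192.168.16.9:15672/ | grep -o 'RabbitMQ Management'    # the string the earlier uri check timed out waiting for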
2025-05-14 02:28:57.416947 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:28:57.416953 | orchestrator | 2025-05-14 02:28:57.416960 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-14 02:28:57.416966 | orchestrator | Wednesday 14 May 2025 02:27:07 +0000 (0:00:00.657) 0:00:29.473 ********* 2025-05-14 02:28:57.416974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:28:57.416988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:28:57.417001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:28:57.417010 | orchestrator | 2025-05-14 02:28:57.417478 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-14 02:28:57.417504 | orchestrator | Wednesday 14 May 2025 02:27:08 +0000 (0:00:01.620) 0:00:31.093 ********* 2025-05-14 02:28:57.417513 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:57.417520 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:57.417527 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:57.417533 | orchestrator | 2025-05-14 02:28:57.417540 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-14 02:28:57.417547 | orchestrator | Wednesday 14 May 2025 02:27:09 +0000 (0:00:01.139) 0:00:32.233 ********* 2025-05-14 02:28:57.417554 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:57.417560 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:57.417567 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:57.417573 | orchestrator | 2025-05-14 02:28:57.417580 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-14 02:28:57.417586 | orchestrator | Wednesday 14 May 2025 02:27:16 +0000 (0:00:06.570) 0:00:38.804 ********* 2025-05-14 02:28:57.417593 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:57.417607 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:57.417615 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:57.417621 | orchestrator | 2025-05-14 02:28:57.417628 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-14 02:28:57.417635 | orchestrator | 2025-05-14 02:28:57.417641 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-14 02:28:57.417648 | orchestrator | Wednesday 14 May 2025 02:27:16 +0000 (0:00:00.341) 0:00:39.145 ********* 2025-05-14 02:28:57.417654 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:28:57.417661 | orchestrator | 2025-05-14 02:28:57.417668 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-14 02:28:57.417675 | orchestrator | Wednesday 14 May 2025 02:27:17 +0000 (0:00:00.742) 0:00:39.887 ********* 2025-05-14 02:28:57.417681 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:57.417687 | orchestrator | 2025-05-14 02:28:57.417694 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-14 02:28:57.417701 | orchestrator | Wednesday 14 May 2025 02:27:18 +0000 (0:00:00.631) 0:00:40.518 ********* 2025-05-14 02:28:57.417707 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:57.417713 | orchestrator | 2025-05-14 02:28:57.417720 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-14 02:28:57.417726 | orchestrator | Wednesday 14 May 2025 02:27:20 +0000 (0:00:01.837) 0:00:42.355 ********* 2025-05-14 02:28:57.417733 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:57.417740 | orchestrator | 2025-05-14 02:28:57.417746 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-14 02:28:57.417753 | orchestrator | 2025-05-14 02:28:57.417759 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-14 
02:28:57.417770 | orchestrator | Wednesday 14 May 2025 02:28:14 +0000 (0:00:54.834) 0:01:37.190 ********* 2025-05-14 02:28:57.417777 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:28:57.417784 | orchestrator | 2025-05-14 02:28:57.417790 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-14 02:28:57.417797 | orchestrator | Wednesday 14 May 2025 02:28:15 +0000 (0:00:00.890) 0:01:38.081 ********* 2025-05-14 02:28:57.417804 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:28:57.417839 | orchestrator | 2025-05-14 02:28:57.417845 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-14 02:28:57.417851 | orchestrator | Wednesday 14 May 2025 02:28:16 +0000 (0:00:00.343) 0:01:38.425 ********* 2025-05-14 02:28:57.417857 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:57.417863 | orchestrator | 2025-05-14 02:28:57.417869 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-14 02:28:57.417874 | orchestrator | Wednesday 14 May 2025 02:28:17 +0000 (0:00:01.879) 0:01:40.304 ********* 2025-05-14 02:28:57.417880 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:57.417887 | orchestrator | 2025-05-14 02:28:57.417893 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-14 02:28:57.417899 | orchestrator | 2025-05-14 02:28:57.417905 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-14 02:28:57.417912 | orchestrator | Wednesday 14 May 2025 02:28:34 +0000 (0:00:16.145) 0:01:56.450 ********* 2025-05-14 02:28:57.417919 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:28:57.417925 | orchestrator | 2025-05-14 02:28:57.417931 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-14 02:28:57.417938 | orchestrator | Wednesday 14 May 2025 02:28:34 +0000 (0:00:00.678) 0:01:57.129 ********* 2025-05-14 02:28:57.417944 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:28:57.417951 | orchestrator | 2025-05-14 02:28:57.417958 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-14 02:28:57.417974 | orchestrator | Wednesday 14 May 2025 02:28:35 +0000 (0:00:00.519) 0:01:57.649 ********* 2025-05-14 02:28:57.417981 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:57.417987 | orchestrator | 2025-05-14 02:28:57.417994 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-14 02:28:57.418008 | orchestrator | Wednesday 14 May 2025 02:28:37 +0000 (0:00:02.020) 0:01:59.670 ********* 2025-05-14 02:28:57.418081 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:57.418089 | orchestrator | 2025-05-14 02:28:57.418095 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-14 02:28:57.418102 | orchestrator | 2025-05-14 02:28:57.418109 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-14 02:28:57.418115 | orchestrator | Wednesday 14 May 2025 02:28:51 +0000 (0:00:14.070) 0:02:13.740 ********* 2025-05-14 02:28:57.418122 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:28:57.418129 | orchestrator | 2025-05-14 02:28:57.418136 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
****************************** 2025-05-14 02:28:57.418143 | orchestrator | Wednesday 14 May 2025 02:28:52 +0000 (0:00:00.701) 0:02:14.441 ********* 2025-05-14 02:28:57.418149 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-14 02:28:57.418156 | orchestrator | enable_outward_rabbitmq_True 2025-05-14 02:28:57.418162 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-14 02:28:57.418169 | orchestrator | outward_rabbitmq_restart 2025-05-14 02:28:57.418176 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:28:57.418183 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:28:57.418189 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:28:57.418196 | orchestrator | 2025-05-14 02:28:57.418203 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-14 02:28:57.418209 | orchestrator | skipping: no hosts matched 2025-05-14 02:28:57.418216 | orchestrator | 2025-05-14 02:28:57.418222 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-14 02:28:57.418228 | orchestrator | skipping: no hosts matched 2025-05-14 02:28:57.418234 | orchestrator | 2025-05-14 02:28:57.418240 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-14 02:28:57.418245 | orchestrator | skipping: no hosts matched 2025-05-14 02:28:57.418251 | orchestrator | 2025-05-14 02:28:57.418257 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:28:57.418263 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-14 02:28:57.418271 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 02:28:57.418278 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:28:57.418284 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:28:57.418290 | orchestrator | 2025-05-14 02:28:57.418297 | orchestrator | 2025-05-14 02:28:57.418303 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:28:57.418311 | orchestrator | Wednesday 14 May 2025 02:28:54 +0000 (0:00:02.401) 0:02:16.843 ********* 2025-05-14 02:28:57.418317 | orchestrator | =============================================================================== 2025-05-14 02:28:57.418324 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 85.05s 2025-05-14 02:28:57.418331 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.57s 2025-05-14 02:28:57.418338 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.74s 2025-05-14 02:28:57.418344 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.30s 2025-05-14 02:28:57.418354 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.69s 2025-05-14 02:28:57.418361 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.40s 2025-05-14 02:28:57.418376 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.40s 2025-05-14 02:28:57.418383 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 
2.31s 2025-05-14 02:28:57.418389 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.26s 2025-05-14 02:28:57.418397 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.87s 2025-05-14 02:28:57.418404 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.67s 2025-05-14 02:28:57.418411 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.62s 2025-05-14 02:28:57.418418 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.50s 2025-05-14 02:28:57.418425 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.46s 2025-05-14 02:28:57.418432 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.40s 2025-05-14 02:28:57.418439 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.34s 2025-05-14 02:28:57.418446 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.14s 2025-05-14 02:28:57.418452 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.14s 2025-05-14 02:28:57.418459 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s 2025-05-14 02:28:57.418466 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.12s 2025-05-14 02:28:57.418481 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:28:57.418489 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:28:57.418496 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:28:57.418504 | orchestrator | 2025-05-14 02:28:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:00.474951 | orchestrator | 2025-05-14 02:29:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:00.475077 | orchestrator | 2025-05-14 02:29:00 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:00.475273 | orchestrator | 2025-05-14 02:29:00 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:00.475850 | orchestrator | 2025-05-14 02:29:00 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:00.475870 | orchestrator | 2025-05-14 02:29:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:03.524148 | orchestrator | 2025-05-14 02:29:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:03.526214 | orchestrator | 2025-05-14 02:29:03 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:03.528285 | orchestrator | 2025-05-14 02:29:03 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:03.530826 | orchestrator | 2025-05-14 02:29:03 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:03.530936 | orchestrator | 2025-05-14 02:29:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:06.579312 | orchestrator | 2025-05-14 02:29:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:06.581207 | orchestrator | 2025-05-14 02:29:06 | INFO  | Task 
9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:06.582459 | orchestrator | 2025-05-14 02:29:06 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:06.584545 | orchestrator | 2025-05-14 02:29:06 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:06.584644 | orchestrator | 2025-05-14 02:29:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:09.627570 | orchestrator | 2025-05-14 02:29:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:09.627719 | orchestrator | 2025-05-14 02:29:09 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:09.627738 | orchestrator | 2025-05-14 02:29:09 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:09.627950 | orchestrator | 2025-05-14 02:29:09 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:09.627972 | orchestrator | 2025-05-14 02:29:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:12.679154 | orchestrator | 2025-05-14 02:29:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:12.681322 | orchestrator | 2025-05-14 02:29:12 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:12.683886 | orchestrator | 2025-05-14 02:29:12 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:12.686089 | orchestrator | 2025-05-14 02:29:12 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:12.686180 | orchestrator | 2025-05-14 02:29:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:15.747518 | orchestrator | 2025-05-14 02:29:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:15.748912 | orchestrator | 2025-05-14 02:29:15 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:15.751163 | orchestrator | 2025-05-14 02:29:15 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:15.753437 | orchestrator | 2025-05-14 02:29:15 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:15.753629 | orchestrator | 2025-05-14 02:29:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:18.797935 | orchestrator | 2025-05-14 02:29:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:18.800346 | orchestrator | 2025-05-14 02:29:18 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:18.800945 | orchestrator | 2025-05-14 02:29:18 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:18.803015 | orchestrator | 2025-05-14 02:29:18 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:18.803058 | orchestrator | 2025-05-14 02:29:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:21.845857 | orchestrator | 2025-05-14 02:29:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:21.847507 | orchestrator | 2025-05-14 02:29:21 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:21.848524 | orchestrator | 2025-05-14 02:29:21 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:21.851207 | orchestrator | 2025-05-14 02:29:21 | INFO  | Task 
0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:21.851317 | orchestrator | 2025-05-14 02:29:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:24.899299 | orchestrator | 2025-05-14 02:29:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:24.899394 | orchestrator | 2025-05-14 02:29:24 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:24.900013 | orchestrator | 2025-05-14 02:29:24 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:24.901083 | orchestrator | 2025-05-14 02:29:24 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:24.901134 | orchestrator | 2025-05-14 02:29:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:27.955182 | orchestrator | 2025-05-14 02:29:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:27.958245 | orchestrator | 2025-05-14 02:29:27 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:27.962591 | orchestrator | 2025-05-14 02:29:27 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:27.964896 | orchestrator | 2025-05-14 02:29:27 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:27.964922 | orchestrator | 2025-05-14 02:29:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:31.009663 | orchestrator | 2025-05-14 02:29:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:31.010169 | orchestrator | 2025-05-14 02:29:31 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:31.012820 | orchestrator | 2025-05-14 02:29:31 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:31.013506 | orchestrator | 2025-05-14 02:29:31 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:31.013645 | orchestrator | 2025-05-14 02:29:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:34.060735 | orchestrator | 2025-05-14 02:29:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:34.061025 | orchestrator | 2025-05-14 02:29:34 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:34.062003 | orchestrator | 2025-05-14 02:29:34 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:34.062647 | orchestrator | 2025-05-14 02:29:34 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:34.062665 | orchestrator | 2025-05-14 02:29:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:37.114341 | orchestrator | 2025-05-14 02:29:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:37.115057 | orchestrator | 2025-05-14 02:29:37 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:37.116025 | orchestrator | 2025-05-14 02:29:37 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:37.119740 | orchestrator | 2025-05-14 02:29:37 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:37.119932 | orchestrator | 2025-05-14 02:29:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:40.174920 | orchestrator | 2025-05-14 02:29:40 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:40.175027 | orchestrator | 2025-05-14 02:29:40 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:40.175380 | orchestrator | 2025-05-14 02:29:40 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:40.177160 | orchestrator | 2025-05-14 02:29:40 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:40.177200 | orchestrator | 2025-05-14 02:29:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:43.225503 | orchestrator | 2025-05-14 02:29:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:43.230964 | orchestrator | 2025-05-14 02:29:43 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:43.235620 | orchestrator | 2025-05-14 02:29:43 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:43.238856 | orchestrator | 2025-05-14 02:29:43 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:43.238910 | orchestrator | 2025-05-14 02:29:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:46.285643 | orchestrator | 2025-05-14 02:29:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:46.285808 | orchestrator | 2025-05-14 02:29:46 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:46.286154 | orchestrator | 2025-05-14 02:29:46 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:46.287594 | orchestrator | 2025-05-14 02:29:46 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:46.287624 | orchestrator | 2025-05-14 02:29:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:49.329986 | orchestrator | 2025-05-14 02:29:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:49.333998 | orchestrator | 2025-05-14 02:29:49 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:49.334089 | orchestrator | 2025-05-14 02:29:49 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:49.335082 | orchestrator | 2025-05-14 02:29:49 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:49.335329 | orchestrator | 2025-05-14 02:29:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:52.377867 | orchestrator | 2025-05-14 02:29:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:52.377971 | orchestrator | 2025-05-14 02:29:52 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:52.377984 | orchestrator | 2025-05-14 02:29:52 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:52.377995 | orchestrator | 2025-05-14 02:29:52 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:52.378005 | orchestrator | 2025-05-14 02:29:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:55.420698 | orchestrator | 2025-05-14 02:29:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:55.420916 | orchestrator | 2025-05-14 02:29:55 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:55.426491 | orchestrator | 2025-05-14 02:29:55 | INFO  | Task 
999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:55.427331 | orchestrator | 2025-05-14 02:29:55 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:55.427370 | orchestrator | 2025-05-14 02:29:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:58.481870 | orchestrator | 2025-05-14 02:29:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:29:58.482305 | orchestrator | 2025-05-14 02:29:58 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:29:58.484983 | orchestrator | 2025-05-14 02:29:58 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:29:58.486366 | orchestrator | 2025-05-14 02:29:58 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:29:58.486427 | orchestrator | 2025-05-14 02:29:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:01.534352 | orchestrator | 2025-05-14 02:30:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:30:01.534453 | orchestrator | 2025-05-14 02:30:01 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:30:01.535087 | orchestrator | 2025-05-14 02:30:01 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:30:01.535962 | orchestrator | 2025-05-14 02:30:01 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:30:01.535990 | orchestrator | 2025-05-14 02:30:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:04.571370 | orchestrator | 2025-05-14 02:30:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:30:04.572247 | orchestrator | 2025-05-14 02:30:04 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:30:04.572279 | orchestrator | 2025-05-14 02:30:04 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state STARTED 2025-05-14 02:30:04.572873 | orchestrator | 2025-05-14 02:30:04 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:30:04.573975 | orchestrator | 2025-05-14 02:30:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:07.612561 | orchestrator | 2025-05-14 02:30:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:30:07.623035 | orchestrator | 2025-05-14 02:30:07.623101 | orchestrator | 2025-05-14 02:30:07.623155 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:30:07.623170 | orchestrator | 2025-05-14 02:30:07.623182 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:30:07.623194 | orchestrator | Wednesday 14 May 2025 02:27:35 +0000 (0:00:00.313) 0:00:00.313 ********* 2025-05-14 02:30:07.623205 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.623241 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.623252 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.623263 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:30:07.623274 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:30:07.623285 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:30:07.623296 | orchestrator | 2025-05-14 02:30:07.623378 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:30:07.623392 | orchestrator | Wednesday 14 May 2025 02:27:36 +0000 
(0:00:00.680) 0:00:00.993 ********* 2025-05-14 02:30:07.623404 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-14 02:30:07.623415 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-14 02:30:07.623426 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-14 02:30:07.623437 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-14 02:30:07.623448 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-14 02:30:07.623459 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-14 02:30:07.623470 | orchestrator | 2025-05-14 02:30:07.623481 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-14 02:30:07.623492 | orchestrator | 2025-05-14 02:30:07.623503 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-14 02:30:07.623514 | orchestrator | Wednesday 14 May 2025 02:27:37 +0000 (0:00:01.352) 0:00:02.345 ********* 2025-05-14 02:30:07.623526 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:30:07.623538 | orchestrator | 2025-05-14 02:30:07.623549 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-14 02:30:07.623560 | orchestrator | Wednesday 14 May 2025 02:27:38 +0000 (0:00:01.565) 0:00:03.911 ********* 2025-05-14 02:30:07.623616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623660 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623673 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623718 | orchestrator | 2025-05-14 02:30:07.623731 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-14 02:30:07.623765 | orchestrator | Wednesday 14 May 2025 02:27:40 +0000 (0:00:01.972) 0:00:05.883 ********* 2025-05-14 02:30:07.623778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623846 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623859 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623872 | orchestrator | 2025-05-14 02:30:07.623885 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-14 02:30:07.623898 | orchestrator | Wednesday 14 May 2025 02:27:43 +0000 (0:00:02.826) 0:00:08.710 ********* 2025-05-14 02:30:07.623911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.623999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624010 | orchestrator | 2025-05-14 02:30:07.624022 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-14 02:30:07.624032 | orchestrator | Wednesday 14 May 2025 02:27:44 +0000 (0:00:00.985) 0:00:09.695 ********* 2025-05-14 02:30:07.624049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624112 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624124 | orchestrator | 2025-05-14 02:30:07.624135 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-14 02:30:07.624146 | orchestrator | Wednesday 14 May 2025 02:27:46 +0000 (0:00:02.003) 0:00:11.699 ********* 2025-05-14 02:30:07.624164 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.624237 | orchestrator | 2025-05-14 02:30:07.624248 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-14 02:30:07.624259 | orchestrator | Wednesday 14 May 2025 02:27:48 +0000 (0:00:01.613) 0:00:13.312 ********* 2025-05-14 02:30:07.624270 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.624282 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.624293 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.624304 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:30:07.624315 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:30:07.624326 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:30:07.624337 | orchestrator | 2025-05-14 
02:30:07.624348 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-14 02:30:07.624359 | orchestrator | Wednesday 14 May 2025 02:27:51 +0000 (0:00:03.284) 0:00:16.597 ********* 2025-05-14 02:30:07.624370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-14 02:30:07.624392 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-14 02:30:07.624403 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-14 02:30:07.624426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-14 02:30:07.624438 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-14 02:30:07.624449 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-14 02:30:07.624460 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:30:07.624527 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:30:07.624539 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:30:07.624550 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:30:07.624561 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:30:07.624572 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:30:07.624583 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:30:07.624596 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:30:07.624607 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:30:07.624618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:30:07.624629 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:30:07.624640 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:30:07.624651 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:30:07.624669 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:30:07.624681 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:30:07.624691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:30:07.624702 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': 
'60000'}) 2025-05-14 02:30:07.624713 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:30:07.624724 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:30:07.624735 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:30:07.624834 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:30:07.624846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:30:07.624857 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:30:07.624868 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:30:07.624879 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:30:07.624890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:30:07.624911 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:30:07.624922 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:30:07.624933 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:30:07.624944 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:30:07.624954 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-14 02:30:07.624966 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-14 02:30:07.624977 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-14 02:30:07.624988 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-14 02:30:07.625006 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-14 02:30:07.625018 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-14 02:30:07.625029 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-14 02:30:07.625040 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-14 02:30:07.625052 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-14 02:30:07.625063 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-14 02:30:07.625074 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-14 02:30:07.625085 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-14 02:30:07.625096 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-14 02:30:07.625107 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-14 02:30:07.625118 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-14 02:30:07.625128 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-14 02:30:07.625139 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-14 02:30:07.625150 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-14 02:30:07.625161 | orchestrator | 2025-05-14 02:30:07.625172 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:30:07.625183 | orchestrator | Wednesday 14 May 2025 02:28:12 +0000 (0:00:20.481) 0:00:37.078 ********* 2025-05-14 02:30:07.625194 | orchestrator | 2025-05-14 02:30:07.625210 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:30:07.625221 | orchestrator | Wednesday 14 May 2025 02:28:12 +0000 (0:00:00.058) 0:00:37.137 ********* 2025-05-14 02:30:07.625232 | orchestrator | 2025-05-14 02:30:07.625243 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:30:07.625254 | orchestrator | Wednesday 14 May 2025 02:28:12 +0000 (0:00:00.218) 0:00:37.356 ********* 2025-05-14 02:30:07.625271 | orchestrator | 2025-05-14 02:30:07.625282 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:30:07.625293 | orchestrator | Wednesday 14 May 2025 02:28:12 +0000 (0:00:00.054) 0:00:37.410 ********* 2025-05-14 02:30:07.625304 | orchestrator | 2025-05-14 02:30:07.625315 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:30:07.625326 | orchestrator | Wednesday 14 May 2025 02:28:12 +0000 (0:00:00.054) 0:00:37.465 ********* 2025-05-14 02:30:07.625337 | orchestrator | 2025-05-14 02:30:07.625348 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:30:07.625359 | orchestrator | Wednesday 14 May 2025 02:28:12 +0000 (0:00:00.055) 0:00:37.521 ********* 2025-05-14 02:30:07.625370 | orchestrator | 2025-05-14 02:30:07.625380 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-14 02:30:07.625391 | orchestrator | Wednesday 14 May 2025 02:28:12 +0000 (0:00:00.056) 0:00:37.577 ********* 2025-05-14 02:30:07.625402 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.625414 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:30:07.625425 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.625450 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:30:07.625462 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.625484 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:30:07.625495 | orchestrator | 2025-05-14 02:30:07.625506 | 
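
The "Configure OVN in OVSDB" task above writes the per-chassis OVN settings into the external_ids column of the Open_vSwitch table: the Geneve encapsulation IP and type, the ovn-remote string pointing at the three southbound DB endpoints on port 6642, the probe intervals, and ovn-monitor-all. The bridge mapping and CMS options stay present only on testbed-node-0/1/2 (the gateway-capable nodes), while nodes 3/4/5 drop them and set ovn-chassis-mac-mappings instead. A minimal sketch of equivalent ovs-vsctl calls, using node-0's values from the log (the role uses its own OVSDB module, so the command form here is illustrative only):

import subprocess

# Values mirror what the task applied to testbed-node-0 above.
SETTINGS = {
    "ovn-encap-ip": "192.168.16.10",
    "ovn-encap-type": "geneve",
    "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
    "ovn-remote-probe-interval": "60000",
    "ovn-openflow-probe-interval": "60",
    "ovn-monitor-all": "false",
    # Present on the gateway-capable nodes only:
    "ovn-bridge-mappings": "physnet1:br-ex",
    "ovn-cms-options": "enable-chassis-as-gw,availability-zones=nova",
}

def apply_chassis_settings(settings: dict[str, str]) -> None:
    for key, value in settings.items():
        # The value is wrapped in double quotes so ovs-vsctl treats strings
        # containing commas and colons as a single map entry.
        subprocess.run(
            ["ovs-vsctl", "set", "Open_vSwitch", ".",
             f'external_ids:{key}="{value}"'],
            check=True,
        )

if __name__ == "__main__":
    apply_chassis_settings(SETTINGS)
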
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-14 02:30:07.625517 | orchestrator | Wednesday 14 May 2025 02:28:14 +0000 (0:00:02.300) 0:00:39.877 ********* 2025-05-14 02:30:07.625528 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.625539 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.625550 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.625561 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:30:07.625572 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:30:07.625583 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:30:07.625593 | orchestrator | 2025-05-14 02:30:07.625604 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-14 02:30:07.625615 | orchestrator | 2025-05-14 02:30:07.625626 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-14 02:30:07.625636 | orchestrator | Wednesday 14 May 2025 02:28:39 +0000 (0:00:24.446) 0:01:04.324 ********* 2025-05-14 02:30:07.625647 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:30:07.625658 | orchestrator | 2025-05-14 02:30:07.625669 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-14 02:30:07.625679 | orchestrator | Wednesday 14 May 2025 02:28:40 +0000 (0:00:00.609) 0:01:04.934 ********* 2025-05-14 02:30:07.625690 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:30:07.625701 | orchestrator | 2025-05-14 02:30:07.625718 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-14 02:30:07.625729 | orchestrator | Wednesday 14 May 2025 02:28:41 +0000 (0:00:01.039) 0:01:05.974 ********* 2025-05-14 02:30:07.625796 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.625809 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.625820 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.625831 | orchestrator | 2025-05-14 02:30:07.625842 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-14 02:30:07.625853 | orchestrator | Wednesday 14 May 2025 02:28:41 +0000 (0:00:00.925) 0:01:06.899 ********* 2025-05-14 02:30:07.625863 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.625872 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.625882 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.625891 | orchestrator | 2025-05-14 02:30:07.625901 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-14 02:30:07.625910 | orchestrator | Wednesday 14 May 2025 02:28:42 +0000 (0:00:00.362) 0:01:07.261 ********* 2025-05-14 02:30:07.625920 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.625937 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.625946 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.625956 | orchestrator | 2025-05-14 02:30:07.625966 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-14 02:30:07.625976 | orchestrator | Wednesday 14 May 2025 02:28:42 +0000 (0:00:00.370) 0:01:07.631 ********* 2025-05-14 02:30:07.625986 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.625995 | orchestrator | ok: [testbed-node-1] 2025-05-14 
02:30:07.626005 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.626063 | orchestrator | 2025-05-14 02:30:07.626077 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-14 02:30:07.626087 | orchestrator | Wednesday 14 May 2025 02:28:43 +0000 (0:00:00.481) 0:01:08.113 ********* 2025-05-14 02:30:07.626097 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.626107 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.626116 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.626126 | orchestrator | 2025-05-14 02:30:07.626136 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-14 02:30:07.626146 | orchestrator | Wednesday 14 May 2025 02:28:43 +0000 (0:00:00.322) 0:01:08.435 ********* 2025-05-14 02:30:07.626156 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626166 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626175 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626185 | orchestrator | 2025-05-14 02:30:07.626195 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-14 02:30:07.626205 | orchestrator | Wednesday 14 May 2025 02:28:43 +0000 (0:00:00.325) 0:01:08.761 ********* 2025-05-14 02:30:07.626229 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626239 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626249 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626259 | orchestrator | 2025-05-14 02:30:07.626279 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-14 02:30:07.626289 | orchestrator | Wednesday 14 May 2025 02:28:44 +0000 (0:00:00.326) 0:01:09.087 ********* 2025-05-14 02:30:07.626299 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626309 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626318 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626328 | orchestrator | 2025-05-14 02:30:07.626338 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-14 02:30:07.626347 | orchestrator | Wednesday 14 May 2025 02:28:44 +0000 (0:00:00.375) 0:01:09.463 ********* 2025-05-14 02:30:07.626357 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626367 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626376 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626386 | orchestrator | 2025-05-14 02:30:07.626396 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-14 02:30:07.626405 | orchestrator | Wednesday 14 May 2025 02:28:44 +0000 (0:00:00.243) 0:01:09.707 ********* 2025-05-14 02:30:07.626415 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626425 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626435 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626444 | orchestrator | 2025-05-14 02:30:07.626454 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-14 02:30:07.626464 | orchestrator | Wednesday 14 May 2025 02:28:45 +0000 (0:00:00.356) 0:01:10.064 ********* 2025-05-14 02:30:07.626474 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626484 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626493 | orchestrator | skipping: [testbed-node-2] 2025-05-14 
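
The lookup_cluster checks above (and the matching southbound checks that follow) are all skipped because this is a fresh deployment: no OVN DB volumes exist yet, so there is no running cluster whose service ports or leader role could be probed. For reference, such a port-liveness probe against the database hosts can be as simple as the sketch below; port 6642 comes from the ovn-remote string earlier in the log, while 6641 for the northbound DB is the conventional default and is assumed here.

import socket

DB_HOSTS = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
PORTS = {"ovn-nb-db": 6641, "ovn-sb-db": 6642}

def port_is_live(host: str, port: int, timeout: float = 2.0) -> bool:
    # A successful TCP connect is treated as "service port is live".
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in PORTS.items():
        for host in DB_HOSTS:
            state = "up" if port_is_live(host, port) else "down"
            print(f"{name} {host}:{port} is {state}")
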
02:30:07.626503 | orchestrator | 2025-05-14 02:30:07.626513 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-14 02:30:07.626523 | orchestrator | Wednesday 14 May 2025 02:28:45 +0000 (0:00:00.334) 0:01:10.398 ********* 2025-05-14 02:30:07.626533 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626543 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626559 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626569 | orchestrator | 2025-05-14 02:30:07.626579 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-14 02:30:07.626589 | orchestrator | Wednesday 14 May 2025 02:28:45 +0000 (0:00:00.338) 0:01:10.737 ********* 2025-05-14 02:30:07.626598 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626608 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626617 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626627 | orchestrator | 2025-05-14 02:30:07.626636 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-14 02:30:07.626645 | orchestrator | Wednesday 14 May 2025 02:28:46 +0000 (0:00:00.265) 0:01:11.003 ********* 2025-05-14 02:30:07.626655 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626664 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626674 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626683 | orchestrator | 2025-05-14 02:30:07.626693 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-14 02:30:07.626702 | orchestrator | Wednesday 14 May 2025 02:28:46 +0000 (0:00:00.368) 0:01:11.371 ********* 2025-05-14 02:30:07.626712 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626722 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626731 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626761 | orchestrator | 2025-05-14 02:30:07.626777 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-14 02:30:07.626787 | orchestrator | Wednesday 14 May 2025 02:28:46 +0000 (0:00:00.324) 0:01:11.695 ********* 2025-05-14 02:30:07.626797 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626806 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626816 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626825 | orchestrator | 2025-05-14 02:30:07.626835 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-14 02:30:07.626844 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:00.240) 0:01:11.936 ********* 2025-05-14 02:30:07.626854 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.626863 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.626873 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.626882 | orchestrator | 2025-05-14 02:30:07.626892 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-14 02:30:07.626902 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:00.357) 0:01:12.293 ********* 2025-05-14 02:30:07.626912 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:30:07.626922 | orchestrator | 2025-05-14 02:30:07.626931 | orchestrator | TASK [ovn-db : Set bootstrap args fact for 
NB (new cluster)] ******************* 2025-05-14 02:30:07.626941 | orchestrator | Wednesday 14 May 2025 02:28:48 +0000 (0:00:00.684) 0:01:12.977 ********* 2025-05-14 02:30:07.626951 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.626961 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.626970 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.626980 | orchestrator | 2025-05-14 02:30:07.626990 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-14 02:30:07.626999 | orchestrator | Wednesday 14 May 2025 02:28:48 +0000 (0:00:00.489) 0:01:13.467 ********* 2025-05-14 02:30:07.627009 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.627019 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.627028 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.627038 | orchestrator | 2025-05-14 02:30:07.627048 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-14 02:30:07.627058 | orchestrator | Wednesday 14 May 2025 02:28:49 +0000 (0:00:01.161) 0:01:14.629 ********* 2025-05-14 02:30:07.627068 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.627078 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.627087 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.627097 | orchestrator | 2025-05-14 02:30:07.627107 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-14 02:30:07.627150 | orchestrator | Wednesday 14 May 2025 02:28:50 +0000 (0:00:00.630) 0:01:15.260 ********* 2025-05-14 02:30:07.627160 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.627170 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.627179 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.627189 | orchestrator | 2025-05-14 02:30:07.627199 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-14 02:30:07.627209 | orchestrator | Wednesday 14 May 2025 02:28:50 +0000 (0:00:00.572) 0:01:15.832 ********* 2025-05-14 02:30:07.627218 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.627228 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.627238 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.627248 | orchestrator | 2025-05-14 02:30:07.627257 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-14 02:30:07.627267 | orchestrator | Wednesday 14 May 2025 02:28:51 +0000 (0:00:00.379) 0:01:16.212 ********* 2025-05-14 02:30:07.627277 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.627287 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.627296 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.627306 | orchestrator | 2025-05-14 02:30:07.627315 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-14 02:30:07.627325 | orchestrator | Wednesday 14 May 2025 02:28:51 +0000 (0:00:00.524) 0:01:16.736 ********* 2025-05-14 02:30:07.627335 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.627344 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.627354 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.627363 | orchestrator | 2025-05-14 02:30:07.627373 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-14 02:30:07.627383 | orchestrator | 
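
The two "Set bootstrap args fact ... (new cluster)" tasks above record that the northbound and southbound databases are being bootstrapped from scratch, while the "(new member)" variants further down are skipped. Conceptually this is the difference between creating a new clustered OVSDB and joining an existing one; with plain OVSDB tooling that distinction looks like the sketch below. The exact arguments kolla-ansible passes to the ovn_nb_db/ovn_sb_db containers are not visible in this log, so the paths, the raft port 6643, and the command form are assumptions for illustration only.

# Sketch of the create-cluster vs join-cluster distinction behind the
# "bootstrap args" facts. Note: ovsdb-tool create-cluster expects a schema
# file as its second argument, while join-cluster expects the database name.
def bootstrap_command(db_file: str, schema_or_name: str,
                      local_addr: str, peers: list[str]) -> list[str]:
    if not peers:
        # First node of a brand-new cluster (this deployment).
        return ["ovsdb-tool", "create-cluster", db_file, schema_or_name, local_addr]
    # Additional member joining an already running cluster (skipped here).
    return ["ovsdb-tool", "join-cluster", db_file, schema_or_name,
            local_addr, *peers]

if __name__ == "__main__":
    print(bootstrap_command(
        "/var/lib/openvswitch/ovn-nb/ovnnb_db.db",   # assumed DB path
        "/usr/share/ovn/ovn-nb.ovsschema",           # assumed schema location
        "tcp:192.168.16.10:6643",                    # assumed NB raft port
        [],
    ))
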
Wednesday 14 May 2025 02:28:52 +0000 (0:00:00.684) 0:01:17.420 ********* 2025-05-14 02:30:07.627393 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.627402 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.627412 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.627422 | orchestrator | 2025-05-14 02:30:07.627431 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-14 02:30:07.627441 | orchestrator | Wednesday 14 May 2025 02:28:53 +0000 (0:00:00.608) 0:01:18.028 ********* 2025-05-14 02:30:07.627452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627647 | orchestrator | 2025-05-14 02:30:07.627657 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-14 02:30:07.627667 | orchestrator | Wednesday 14 May 2025 02:28:54 +0000 (0:00:01.481) 0:01:19.510 ********* 2025-05-14 02:30:07.627677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627805 | orchestrator | 2025-05-14 02:30:07.627815 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-14 02:30:07.627825 | orchestrator | Wednesday 14 May 2025 02:28:58 +0000 (0:00:03.997) 0:01:23.507 ********* 2025-05-14 02:30:07.627835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.627943 | orchestrator | 2025-05-14 02:30:07.627952 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:30:07.627962 | orchestrator | Wednesday 14 May 2025 02:29:01 +0000 (0:00:02.762) 0:01:26.269 ********* 2025-05-14 02:30:07.627972 | orchestrator | 2025-05-14 02:30:07.627981 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:30:07.627990 | orchestrator | Wednesday 14 May 2025 02:29:01 +0000 (0:00:00.075) 0:01:26.344 ********* 2025-05-14 02:30:07.628000 | orchestrator | 2025-05-14 02:30:07.628009 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:30:07.628019 | orchestrator | Wednesday 14 May 2025 02:29:01 +0000 (0:00:00.067) 0:01:26.412 ********* 2025-05-14 02:30:07.628029 | orchestrator | 2025-05-14 02:30:07.628038 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-14 02:30:07.628047 | orchestrator | Wednesday 
14 May 2025 02:29:01 +0000 (0:00:00.063) 0:01:26.475 ********* 2025-05-14 02:30:07.628057 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.628066 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.628076 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.628085 | orchestrator | 2025-05-14 02:30:07.628095 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-14 02:30:07.628105 | orchestrator | Wednesday 14 May 2025 02:29:09 +0000 (0:00:07.716) 0:01:34.192 ********* 2025-05-14 02:30:07.628114 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.628124 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.628139 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.628149 | orchestrator | 2025-05-14 02:30:07.628158 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-14 02:30:07.628168 | orchestrator | Wednesday 14 May 2025 02:29:16 +0000 (0:00:07.707) 0:01:41.900 ********* 2025-05-14 02:30:07.628178 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.628187 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.628197 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.628206 | orchestrator | 2025-05-14 02:30:07.628216 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-14 02:30:07.628225 | orchestrator | Wednesday 14 May 2025 02:29:24 +0000 (0:00:07.844) 0:01:49.745 ********* 2025-05-14 02:30:07.628234 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.628244 | orchestrator | 2025-05-14 02:30:07.628254 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-14 02:30:07.628263 | orchestrator | Wednesday 14 May 2025 02:29:24 +0000 (0:00:00.122) 0:01:49.867 ********* 2025-05-14 02:30:07.628273 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.628282 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.628291 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.628301 | orchestrator | 2025-05-14 02:30:07.628316 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-14 02:30:07.628326 | orchestrator | Wednesday 14 May 2025 02:29:26 +0000 (0:00:01.134) 0:01:51.002 ********* 2025-05-14 02:30:07.628335 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.628345 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.628354 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.628364 | orchestrator | 2025-05-14 02:30:07.628373 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-14 02:30:07.628383 | orchestrator | Wednesday 14 May 2025 02:29:26 +0000 (0:00:00.694) 0:01:51.696 ********* 2025-05-14 02:30:07.628392 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.628402 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.628411 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.628421 | orchestrator | 2025-05-14 02:30:07.628431 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-14 02:30:07.628440 | orchestrator | Wednesday 14 May 2025 02:29:27 +0000 (0:00:01.061) 0:01:52.758 ********* 2025-05-14 02:30:07.628450 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.628459 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.628469 | orchestrator | 
changed: [testbed-node-0] 2025-05-14 02:30:07.628478 | orchestrator | 2025-05-14 02:30:07.628488 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-14 02:30:07.628497 | orchestrator | Wednesday 14 May 2025 02:29:28 +0000 (0:00:00.657) 0:01:53.416 ********* 2025-05-14 02:30:07.628507 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.628516 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.628526 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.628535 | orchestrator | 2025-05-14 02:30:07.628545 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-14 02:30:07.628554 | orchestrator | Wednesday 14 May 2025 02:29:29 +0000 (0:00:01.303) 0:01:54.719 ********* 2025-05-14 02:30:07.628564 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.628573 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.628583 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.628592 | orchestrator | 2025-05-14 02:30:07.628601 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-14 02:30:07.628611 | orchestrator | Wednesday 14 May 2025 02:29:30 +0000 (0:00:00.884) 0:01:55.604 ********* 2025-05-14 02:30:07.628620 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.628630 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.628639 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.628649 | orchestrator | 2025-05-14 02:30:07.628658 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-14 02:30:07.628667 | orchestrator | Wednesday 14 May 2025 02:29:31 +0000 (0:00:00.463) 0:01:56.067 ********* 2025-05-14 02:30:07.628688 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628698 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628709 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628719 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628729 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628783 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628802 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628813 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628823 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628832 | orchestrator | 2025-05-14 02:30:07.628842 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-14 02:30:07.628858 | orchestrator | Wednesday 14 May 2025 02:29:32 +0000 (0:00:01.676) 0:01:57.744 ********* 2025-05-14 02:30:07.628868 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628883 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628892 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628938 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.628968 | orchestrator | 2025-05-14 02:30:07.628977 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-14 02:30:07.628993 | orchestrator | Wednesday 14 May 2025 02:29:37 +0000 (0:00:04.604) 0:02:02.348 ********* 2025-05-14 02:30:07.629003 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-14 02:30:07.629013 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.629027 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.629037 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.629046 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.629055 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.629063 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.629076 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:30:07.629084 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
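
Once the database containers are (re)started, the role repeatedly looks up the OVN_Northbound and OVN_Southbound raft leaders and applies the connection settings only from the leader, which is why the "Configure OVN NB/SB connection settings" tasks run on testbed-node-0 and are skipped on the other two nodes. A manual leader check against the running containers could look like the sketch below; the container names follow the kolla names shown in the log, but the docker exec/ovn-appctl invocation and the control-socket paths are assumptions, not the role's actual implementation.

import subprocess

DATABASES = {
    "OVN_Northbound": ("ovn_nb_db", "/var/run/ovn/ovnnb_db.ctl"),
    "OVN_Southbound": ("ovn_sb_db", "/var/run/ovn/ovnsb_db.ctl"),
}

def cluster_status(database: str) -> str:
    container, ctl_socket = DATABASES[database]
    # cluster/status reports raft details, including a "Role:" line
    # (leader or follower) for this server.
    result = subprocess.run(
        ["docker", "exec", container, "ovn-appctl", "-t", ctl_socket,
         "cluster/status", database],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    for db in DATABASES:
        role = next((line for line in cluster_status(db).splitlines()
                     if line.startswith("Role:")), "Role: unknown")
        print(f"{db}: {role.strip()}")
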
2025-05-14 02:30:07.629092 | orchestrator | 2025-05-14 02:30:07.629105 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:30:07.629113 | orchestrator | Wednesday 14 May 2025 02:29:40 +0000 (0:00:03.374) 0:02:05.723 ********* 2025-05-14 02:30:07.629121 | orchestrator | 2025-05-14 02:30:07.629129 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:30:07.629137 | orchestrator | Wednesday 14 May 2025 02:29:40 +0000 (0:00:00.057) 0:02:05.780 ********* 2025-05-14 02:30:07.629145 | orchestrator | 2025-05-14 02:30:07.629152 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:30:07.629160 | orchestrator | Wednesday 14 May 2025 02:29:41 +0000 (0:00:00.212) 0:02:05.992 ********* 2025-05-14 02:30:07.629168 | orchestrator | 2025-05-14 02:30:07.629176 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-14 02:30:07.629184 | orchestrator | Wednesday 14 May 2025 02:29:41 +0000 (0:00:00.057) 0:02:06.050 ********* 2025-05-14 02:30:07.629191 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.629199 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.629207 | orchestrator | 2025-05-14 02:30:07.629214 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-14 02:30:07.629222 | orchestrator | Wednesday 14 May 2025 02:29:47 +0000 (0:00:06.239) 0:02:12.289 ********* 2025-05-14 02:30:07.629230 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.629237 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.629245 | orchestrator | 2025-05-14 02:30:07.629253 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-14 02:30:07.629261 | orchestrator | Wednesday 14 May 2025 02:29:53 +0000 (0:00:06.524) 0:02:18.814 ********* 2025-05-14 02:30:07.629268 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.629276 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.629284 | orchestrator | 2025-05-14 02:30:07.629298 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-14 02:30:07.629306 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:06.432) 0:02:25.247 ********* 2025-05-14 02:30:07.629313 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.629321 | orchestrator | 2025-05-14 02:30:07.629329 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-14 02:30:07.629337 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:00.294) 0:02:25.542 ********* 2025-05-14 02:30:07.629344 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.629352 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.629360 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.629368 | orchestrator | 2025-05-14 02:30:07.629375 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-14 02:30:07.629383 | orchestrator | Wednesday 14 May 2025 02:30:01 +0000 (0:00:00.767) 0:02:26.309 ********* 2025-05-14 02:30:07.629391 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.629399 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.629406 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.629415 | orchestrator | 2025-05-14 02:30:07.629422 | 
orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-14 02:30:07.629430 | orchestrator | Wednesday 14 May 2025 02:30:02 +0000 (0:00:00.698) 0:02:27.007 ********* 2025-05-14 02:30:07.629438 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.629445 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.629453 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.629461 | orchestrator | 2025-05-14 02:30:07.629469 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-14 02:30:07.629476 | orchestrator | Wednesday 14 May 2025 02:30:03 +0000 (0:00:00.957) 0:02:27.965 ********* 2025-05-14 02:30:07.629484 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.629492 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.629499 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.629507 | orchestrator | 2025-05-14 02:30:07.629515 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-14 02:30:07.629530 | orchestrator | Wednesday 14 May 2025 02:30:03 +0000 (0:00:00.905) 0:02:28.870 ********* 2025-05-14 02:30:07.629538 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.629546 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.629554 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.629562 | orchestrator | 2025-05-14 02:30:07.629570 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-14 02:30:07.629578 | orchestrator | Wednesday 14 May 2025 02:30:05 +0000 (0:00:01.113) 0:02:29.984 ********* 2025-05-14 02:30:07.629585 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.629593 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.629601 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.629609 | orchestrator | 2025-05-14 02:30:07.629616 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:30:07.629624 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-14 02:30:07.629632 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-14 02:30:07.629645 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-14 02:30:07.629654 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:30:07.629662 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:30:07.629670 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:30:07.629677 | orchestrator | 2025-05-14 02:30:07.629685 | orchestrator | 2025-05-14 02:30:07.629693 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:30:07.629701 | orchestrator | Wednesday 14 May 2025 02:30:06 +0000 (0:00:01.431) 0:02:31.416 ********* 2025-05-14 02:30:07.629709 | orchestrator | =============================================================================== 2025-05-14 02:30:07.629716 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 24.45s 2025-05-14 02:30:07.629724 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.48s 2025-05-14 
02:30:07.629732 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.28s 2025-05-14 02:30:07.629754 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.23s 2025-05-14 02:30:07.629762 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.96s 2025-05-14 02:30:07.629770 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.60s 2025-05-14 02:30:07.629778 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.00s 2025-05-14 02:30:07.629786 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.37s 2025-05-14 02:30:07.629794 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.28s 2025-05-14 02:30:07.629801 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.83s 2025-05-14 02:30:07.629809 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.76s 2025-05-14 02:30:07.629817 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.30s 2025-05-14 02:30:07.629824 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.00s 2025-05-14 02:30:07.629837 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.97s 2025-05-14 02:30:07.629845 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.68s 2025-05-14 02:30:07.629852 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.61s 2025-05-14 02:30:07.629865 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.57s 2025-05-14 02:30:07.629873 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s 2025-05-14 02:30:07.629881 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.43s 2025-05-14 02:30:07.629888 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s 2025-05-14 02:30:07.629896 | orchestrator | 2025-05-14 02:30:07 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:30:07.629904 | orchestrator | 2025-05-14 02:30:07 | INFO  | Task 999a4d04-0e3d-415e-bf9a-f4a2828d2be2 is in state SUCCESS 2025-05-14 02:30:07.629912 | orchestrator | 2025-05-14 02:30:07 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:30:07.629920 | orchestrator | 2025-05-14 02:30:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:10.682543 | orchestrator | 2025-05-14 02:30:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:30:10.683172 | orchestrator | 2025-05-14 02:30:10 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:30:10.685082 | orchestrator | 2025-05-14 02:30:10 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:30:10.685117 | orchestrator | 2025-05-14 02:30:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:13.749139 | orchestrator | 2025-05-14 02:30:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:30:13.749519 | orchestrator | 2025-05-14 02:30:13 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 
02:30:13.750916 | orchestrator | 2025-05-14 02:30:13 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:30:13.750984 | orchestrator | 2025-05-14 02:30:13 | INFO  | Wait 1 second(s) until the next check
[identical polling entries repeat at roughly three-second intervals from 02:30:16 through 02:33:16: tasks d96aeed1-a30d-4e84-85b3-93c7cfc3e055, 9a9341a3-fba1-4485-b11c-2f04f19927b1 and 0851bd95-ea37-4667-aa4b-593edd0419a2 remain in state STARTED; task 4ac71b95-9eb3-4e7e-b6b8-d10f3f9fc682 appears in state STARTED at 02:32:31 and reaches state SUCCESS at 02:32:40]
2025-05-14 
02:33:16.977537 | orchestrator | 2025-05-14 02:33:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:20.020752 | orchestrator | 2025-05-14 02:33:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:33:20.020966 | orchestrator | 2025-05-14 02:33:20 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:33:20.021759 | orchestrator | 2025-05-14 02:33:20 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:33:20.021783 | orchestrator | 2025-05-14 02:33:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:23.057947 | orchestrator | 2025-05-14 02:33:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:33:23.059931 | orchestrator | 2025-05-14 02:33:23 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:33:23.059965 | orchestrator | 2025-05-14 02:33:23 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:33:23.059986 | orchestrator | 2025-05-14 02:33:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:26.124568 | orchestrator | 2025-05-14 02:33:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:33:26.126103 | orchestrator | 2025-05-14 02:33:26 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:33:26.128506 | orchestrator | 2025-05-14 02:33:26 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:33:26.128983 | orchestrator | 2025-05-14 02:33:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:29.183618 | orchestrator | 2025-05-14 02:33:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:33:29.185321 | orchestrator | 2025-05-14 02:33:29 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:33:29.187728 | orchestrator | 2025-05-14 02:33:29 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:33:29.189416 | orchestrator | 2025-05-14 02:33:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:32.229748 | orchestrator | 2025-05-14 02:33:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:33:32.231893 | orchestrator | 2025-05-14 02:33:32 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:33:32.233233 | orchestrator | 2025-05-14 02:33:32 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state STARTED 2025-05-14 02:33:32.233381 | orchestrator | 2025-05-14 02:33:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:35.274802 | orchestrator | 2025-05-14 02:33:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:33:35.274945 | orchestrator | 2025-05-14 02:33:35 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:33:35.275613 | orchestrator | 2025-05-14 02:33:35 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:33:35.280420 | orchestrator | 2025-05-14 02:33:35 | INFO  | Task 0851bd95-ea37-4667-aa4b-593edd0419a2 is in state SUCCESS 2025-05-14 02:33:35.282312 | orchestrator | 2025-05-14 02:33:35.282347 | orchestrator | None 2025-05-14 02:33:35.282354 | orchestrator | 2025-05-14 02:33:35.282360 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:33:35.282368 | orchestrator | 
2025-05-14 02:33:35.282374 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:33:35.282437 | orchestrator | Wednesday 14 May 2025 02:26:10 +0000 (0:00:00.716) 0:00:00.716 ********* 2025-05-14 02:33:35.282463 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.282505 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.282512 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.282517 | orchestrator | 2025-05-14 02:33:35.282552 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:33:35.282558 | orchestrator | Wednesday 14 May 2025 02:26:11 +0000 (0:00:00.504) 0:00:01.221 ********* 2025-05-14 02:33:35.282565 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-14 02:33:35.282572 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-14 02:33:35.282578 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-14 02:33:35.282584 | orchestrator | 2025-05-14 02:33:35.282590 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-14 02:33:35.282596 | orchestrator | 2025-05-14 02:33:35.282602 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-14 02:33:35.282608 | orchestrator | Wednesday 14 May 2025 02:26:11 +0000 (0:00:00.490) 0:00:01.712 ********* 2025-05-14 02:33:35.282661 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.282668 | orchestrator | 2025-05-14 02:33:35.282675 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-14 02:33:35.282681 | orchestrator | Wednesday 14 May 2025 02:26:13 +0000 (0:00:01.894) 0:00:03.606 ********* 2025-05-14 02:33:35.282687 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.282693 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.282698 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.282705 | orchestrator | 2025-05-14 02:33:35.282711 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-14 02:33:35.282717 | orchestrator | Wednesday 14 May 2025 02:26:15 +0000 (0:00:01.362) 0:00:04.969 ********* 2025-05-14 02:33:35.282739 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.282745 | orchestrator | 2025-05-14 02:33:35.282751 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-14 02:33:35.282757 | orchestrator | Wednesday 14 May 2025 02:26:16 +0000 (0:00:01.242) 0:00:06.212 ********* 2025-05-14 02:33:35.282763 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.282769 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.282775 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.282782 | orchestrator | 2025-05-14 02:33:35.282788 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-14 02:33:35.282814 | orchestrator | Wednesday 14 May 2025 02:26:17 +0000 (0:00:00.990) 0:00:07.202 ********* 2025-05-14 02:33:35.282822 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:33:35.282827 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-14 
02:33:35.282833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:33:35.282839 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:33:35.282845 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:33:35.282851 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:33:35.282857 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-14 02:33:35.282865 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-14 02:33:35.282870 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-14 02:33:35.282876 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-14 02:33:35.282882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-14 02:33:35.282895 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-14 02:33:35.282901 | orchestrator | 2025-05-14 02:33:35.282907 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-14 02:33:35.282913 | orchestrator | Wednesday 14 May 2025 02:26:20 +0000 (0:00:03.021) 0:00:10.224 ********* 2025-05-14 02:33:35.282919 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-14 02:33:35.282925 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-14 02:33:35.282931 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-14 02:33:35.282937 | orchestrator | 2025-05-14 02:33:35.282943 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-14 02:33:35.282949 | orchestrator | Wednesday 14 May 2025 02:26:21 +0000 (0:00:01.367) 0:00:11.591 ********* 2025-05-14 02:33:35.282954 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-14 02:33:35.282961 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-14 02:33:35.282967 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-14 02:33:35.282973 | orchestrator | 2025-05-14 02:33:35.282979 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-14 02:33:35.282985 | orchestrator | Wednesday 14 May 2025 02:26:24 +0000 (0:00:02.616) 0:00:14.208 ********* 2025-05-14 02:33:35.282991 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-14 02:33:35.282997 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.283091 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-14 02:33:35.283097 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.283103 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-14 02:33:35.283109 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.283115 | orchestrator | 2025-05-14 02:33:35.283120 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-14 02:33:35.283131 | orchestrator | Wednesday 14 May 2025 02:26:25 +0000 (0:00:01.199) 0:00:15.407 ********* 2025-05-14 02:33:35.283141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.283222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.283230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.283243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.283254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.283260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.283267 | orchestrator | 2025-05-14 02:33:35.283273 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-14 02:33:35.283280 | orchestrator | Wednesday 14 May 2025 02:26:28 +0000 (0:00:03.039) 0:00:18.447 ********* 2025-05-14 02:33:35.283286 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.283292 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.283298 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.283305 | orchestrator | 2025-05-14 02:33:35.283338 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-14 02:33:35.283345 | orchestrator | Wednesday 14 May 2025 02:26:31 +0000 (0:00:03.002) 0:00:21.449 ********* 2025-05-14 02:33:35.283351 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-14 02:33:35.283357 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-14 02:33:35.283367 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-14 02:33:35.283372 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-14 02:33:35.283379 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-14 02:33:35.283385 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-14 02:33:35.283391 | orchestrator | 2025-05-14 02:33:35.283397 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-14 02:33:35.283403 | orchestrator | Wednesday 14 May 2025 02:26:36 +0000 (0:00:04.633) 0:00:26.083 ********* 2025-05-14 02:33:35.283409 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.283414 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.283420 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.283426 | orchestrator | 2025-05-14 02:33:35.283432 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-14 02:33:35.283438 | orchestrator | Wednesday 14 May 2025 02:26:38 +0000 (0:00:02.061) 0:00:28.145 ********* 2025-05-14 02:33:35.283444 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.283451 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.283456 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.283462 | orchestrator | 2025-05-14 02:33:35.283468 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-14 02:33:35.283498 | orchestrator | Wednesday 14 May 2025 02:26:40 +0000 (0:00:01.912) 0:00:30.057 ********* 2025-05-14 02:33:35.283516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-14 02:33:35.283522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-14 02:33:35.283529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-14 02:33:35.283556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:33:35.283584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:33:35.283591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:33:35.283603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.283610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.283616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.283665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.283673 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.283681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.283687 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.283701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.283712 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.283719 | orchestrator | 2025-05-14 02:33:35.283725 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-14 02:33:35.283731 | orchestrator | Wednesday 14 May 2025 02:26:43 +0000 (0:00:03.439) 0:00:33.496 ********* 2025-05-14 02:33:35.283737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.283885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.283893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.283906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.283913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.283925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.283937 | orchestrator | 2025-05-14 02:33:35.283946 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-14 02:33:35.283953 | orchestrator | Wednesday 14 May 2025 02:26:48 +0000 (0:00:05.122) 0:00:38.619 ********* 2025-05-14 02:33:35.283959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.283985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.285517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.285567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.285575 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.285582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.285589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.285595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.285602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.285613 | orchestrator | 2025-05-14 02:33:35.285619 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-14 02:33:35.285626 | orchestrator | Wednesday 14 May 2025 02:26:52 +0000 
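The config.json loop that finished just above follows the usual kolla pattern: every enabled service gets a config.json that the container's kolla_start entrypoint reads to copy its configuration into place, and disabled entries (haproxy-ssh has enabled: False) are skipped. A hedged sketch of such a task, with illustrative src/dest paths and a loadbalancer_services dict assumed to look like the items printed in this log:

    - name: Copying over config.json files for services
      ansible.builtin.template:
        src: "{{ item.key }}.json.j2"                  # e.g. haproxy.json.j2 (illustrative name)
        dest: "/etc/kolla/{{ item.key }}/config.json"  # read by kolla_start inside the container
        mode: "0660"
      become: true
      when: item.value.enabled | bool                  # why the haproxy-ssh items are skipped
      with_dict: "{{ loadbalancer_services }}"         # assumed to hold the dicts shown above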
(0:00:03.722) 0:00:42.341 ********* 2025-05-14 02:33:35.285664 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-14 02:33:35.285672 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-14 02:33:35.285682 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-14 02:33:35.285688 | orchestrator | 2025-05-14 02:33:35.285694 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-14 02:33:35.285700 | orchestrator | Wednesday 14 May 2025 02:26:54 +0000 (0:00:02.556) 0:00:44.897 ********* 2025-05-14 02:33:35.285706 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-14 02:33:35.285713 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-14 02:33:35.285719 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-14 02:33:35.285725 | orchestrator | 2025-05-14 02:33:35.285731 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-14 02:33:35.285738 | orchestrator | Wednesday 14 May 2025 02:26:57 +0000 (0:00:02.695) 0:00:47.593 ********* 2025-05-14 02:33:35.285745 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.285752 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.285758 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.285765 | orchestrator | 2025-05-14 02:33:35.285771 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-14 02:33:35.285777 | orchestrator | Wednesday 14 May 2025 02:26:58 +0000 (0:00:01.139) 0:00:48.733 ********* 2025-05-14 02:33:35.285783 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-14 02:33:35.285791 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-14 02:33:35.285797 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-14 02:33:35.285803 | orchestrator | 2025-05-14 02:33:35.285809 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-14 02:33:35.285839 | orchestrator | Wednesday 14 May 2025 02:27:03 +0000 (0:00:05.043) 0:00:53.776 ********* 2025-05-14 02:33:35.285845 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-14 02:33:35.285851 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-14 02:33:35.285857 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-14 02:33:35.285864 | orchestrator | 2025-05-14 02:33:35.285870 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-14 02:33:35.285876 | orchestrator | Wednesday 14 May 2025 02:27:05 +0000 (0:00:02.131) 0:00:55.908 ********* 2025-05-14 02:33:35.285882 | orchestrator | changed: [testbed-node-0] 
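The block above renders the main templates (haproxy_main.cfg.j2, proxysql.yaml.j2, keepalived.conf.j2) and, specific to the testbed configuration repository, copies the custom services.d overlay from /opt/configuration/environments/kolla/files/overlays/haproxy/. A rough equivalent of that overlay task; the fileglob source path is taken from the logged item, while the task shape and destination are assumptions rather than the real role code:

    - name: Copying over custom haproxy services configuration
      ansible.builtin.template:
        src: "{{ item }}"
        dest: "/etc/kolla/haproxy/services.d/{{ item | basename }}"   # extra haproxy service fragments
        mode: "0660"
      become: true
      with_fileglob:
        - "/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/*.cfg"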
=> (item=haproxy.pem) 2025-05-14 02:33:35.285888 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-14 02:33:35.285894 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-14 02:33:35.285901 | orchestrator | 2025-05-14 02:33:35.285907 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-14 02:33:35.285928 | orchestrator | Wednesday 14 May 2025 02:27:07 +0000 (0:00:02.020) 0:00:57.928 ********* 2025-05-14 02:33:35.285935 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-14 02:33:35.285946 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-14 02:33:35.285952 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-14 02:33:35.285958 | orchestrator | 2025-05-14 02:33:35.285963 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-14 02:33:35.285969 | orchestrator | Wednesday 14 May 2025 02:27:10 +0000 (0:00:02.357) 0:01:00.286 ********* 2025-05-14 02:33:35.285975 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.285981 | orchestrator | 2025-05-14 02:33:35.285987 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-14 02:33:35.286106 | orchestrator | Wednesday 14 May 2025 02:27:11 +0000 (0:00:00.866) 0:01:01.152 ********* 2025-05-14 02:33:35.286118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}}) 2025-05-14 02:33:35.286204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.286265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.286281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.286293 | orchestrator | 2025-05-14 02:33:35.286302 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-14 02:33:35.286308 | orchestrator | Wednesday 14 May 2025 02:27:14 +0000 (0:00:03.532) 0:01:04.685 ********* 2025-05-14 02:33:35.286315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-14 02:33:35.286321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:33:35.286336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.286348 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.286359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-14 02:33:35.286371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:33:35.286387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.286394 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.286399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-14 02:33:35.286406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:33:35.286413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.286424 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.286430 | orchestrator | 2025-05-14 02:33:35.286436 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-14 02:33:35.286471 | orchestrator | Wednesday 14 May 2025 02:27:15 +0000 (0:00:00.845) 0:01:05.531 ********* 2025-05-14 02:33:35.286478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-14 02:33:35.286484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:33:35.286494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.286505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-14 02:33:35.286512 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.286518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:33:35.286529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.286535 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.286542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-14 02:33:35.286548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:33:35.286555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:33:35.286561 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.286568 | orchestrator | 2025-05-14 02:33:35.286574 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-14 02:33:35.286583 | orchestrator | Wednesday 14 May 2025 02:27:17 +0000 (0:00:01.567) 0:01:07.098 ********* 2025-05-14 02:33:35.286589 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-14 02:33:35.286594 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-14 02:33:35.286604 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-14 02:33:35.286731 | orchestrator | 2025-05-14 02:33:35.286738 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-14 02:33:35.286745 | orchestrator | Wednesday 14 May 2025 02:27:19 +0000 (0:00:02.173) 0:01:09.271 ********* 2025-05-14 02:33:35.286751 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-14 02:33:35.286758 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-14 02:33:35.286764 | orchestrator | changed: [testbed-node-2] => 
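The two service-cert-copy tasks above (backend internal TLS certificate and key) were skipped on all three nodes, which is consistent with backend TLS being disabled in this deployment (kolla-ansible's kolla_enable_tls_backend defaults to "no"). An illustrative conditional copy in that spirit; the file names here are placeholders, not the paths the real role uses:

    - name: Copying over backend internal TLS certificate
      ansible.builtin.copy:
        src: "backend-cert.pem"                  # placeholder source name
        dest: "/etc/kolla/haproxy/backend.pem"   # placeholder destination
        mode: "0660"
      become: true
      when: kolla_enable_tls_backend | bool      # false in this run, hence the skips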
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-14 02:33:35.286771 | orchestrator | 2025-05-14 02:33:35.286777 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-14 02:33:35.286815 | orchestrator | Wednesday 14 May 2025 02:27:21 +0000 (0:00:02.287) 0:01:11.559 ********* 2025-05-14 02:33:35.286822 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:33:35.286828 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:33:35.286834 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:33:35.286841 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:33:35.286847 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.286853 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:33:35.286859 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.286864 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:33:35.286870 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.286876 | orchestrator | 2025-05-14 02:33:35.286882 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-14 02:33:35.286888 | orchestrator | Wednesday 14 May 2025 02:27:23 +0000 (0:00:01.866) 0:01:13.426 ********* 2025-05-14 02:33:35.286894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:33:35.286951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.286958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.286965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.286974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.286981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:33:35.286993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8', '__omit_place_holder__ded16fc23c657786a28a4920fcf387a868c828a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:33:35.286999 | orchestrator | 2025-05-14 02:33:35.287005 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-14 02:33:35.287011 | orchestrator | Wednesday 14 May 2025 02:27:26 +0000 (0:00:03.331) 0:01:16.757 ********* 2025-05-14 02:33:35.287017 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.287023 | orchestrator | 2025-05-14 02:33:35.287029 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-14 02:33:35.287035 | orchestrator | Wednesday 14 May 2025 02:27:27 +0000 (0:00:00.707) 0:01:17.465 ********* 2025-05-14 02:33:35.287092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 
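Each enabled container definition looped over in "Check loadbalancer containers" carries a healthcheck block; the haproxy one polls the local monitor endpoint on port 61313 seen in the items above. Rewritten from the logged dict as YAML for readability, with comments noting the Docker healthcheck options these fields roughly correspond to (values are seconds):

    healthcheck:
      interval: "30"      # time between checks (--health-interval)
      retries: "3"        # consecutive failures before "unhealthy" (--health-retries)
      start_period: "5"   # startup grace period (--health-start-period)
      timeout: "30"       # per-check timeout (--health-timeout)
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]   # check command (--health-cmd)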
'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-14 02:33:35.287109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.287116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-14 02:33:35.287135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-14 02:33:35.287149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.287161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.287178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287206 | orchestrator | 2025-05-14 02:33:35.287213 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-14 02:33:35.287219 | orchestrator | Wednesday 14 May 2025 02:27:31 +0000 (0:00:03.604) 0:01:21.069 ********* 2025-05-14 02:33:35.287225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-14 02:33:35.287232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.287267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 
'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287289 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.287299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-14 02:33:35.287305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.287312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287324 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.287331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-14 02:33:35.287346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.287356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287369 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.287375 | orchestrator | 2025-05-14 02:33:35.287381 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-14 02:33:35.287387 | orchestrator | Wednesday 14 May 2025 02:27:32 +0000 (0:00:01.053) 0:01:22.122 ********* 2025-05-14 02:33:35.287394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:33:35.287400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:33:35.287439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:33:35.287446 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.287452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:33:35.287459 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.287465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:33:35.287472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:33:35.287478 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.287490 | orchestrator | 2025-05-14 02:33:35.287496 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-14 02:33:35.287502 | orchestrator | Wednesday 14 May 2025 02:27:33 +0000 (0:00:01.395) 0:01:23.518 ********* 2025-05-14 02:33:35.287510 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.287545 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.287552 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.287558 | orchestrator | 2025-05-14 02:33:35.287565 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-14 02:33:35.287571 | orchestrator | Wednesday 14 May 2025 02:27:35 +0000 (0:00:01.549) 0:01:25.067 ********* 2025-05-14 02:33:35.287577 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.287584 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.287590 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.287596 | orchestrator | 2025-05-14 02:33:35.287602 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-14 02:33:35.287609 | orchestrator | Wednesday 14 May 2025 02:27:37 +0000 (0:00:02.492) 0:01:27.560 ********* 2025-05-14 02:33:35.287615 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.287622 | orchestrator | 2025-05-14 02:33:35.287699 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-14 02:33:35.287707 | orchestrator | Wednesday 14 May 2025 02:27:38 +0000 (0:00:01.004) 0:01:28.564 ********* 2025-05-14 02:33:35.287726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.287735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.287762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.287791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287808 | orchestrator | 2025-05-14 02:33:35.287815 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-14 02:33:35.287821 | orchestrator | Wednesday 14 May 2025 02:27:44 +0000 (0:00:06.060) 0:01:34.625 ********* 2025-05-14 02:33:35.287829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.287847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287862 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.287868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.287881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287894 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.287904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.287916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.287931 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.287941 | 
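The aodh and barbican items above each declare two haproxy entries on the same port: one with 'external': False for the internal VIP and one with 'external': True tied to the external FQDN api.testbed.osism.xyz. A minimal sketch of how such a pair could be expanded into haproxy listen stanzas follows; the render_listen_blocks helper, the rendered text and the internal VIP address are illustrative assumptions, not the actual haproxy-config role templates (the backend addresses 192.168.16.10-12 are the ones visible in the healthcheck URLs above).

# Illustrative only: expand one kolla-style 'haproxy' sub-dict (as echoed in the
# aodh-api item above) into haproxy "listen" stanzas, one per internal/external entry.
def render_listen_blocks(haproxy_cfg, internal_vip, external_vip, backends):
    blocks = []
    for name, entry in haproxy_cfg.items():
        if entry.get("enabled") != "yes":
            continue
        bind_ip = external_vip if entry.get("external") else internal_vip
        lines = [
            f"listen {name}",
            f"    mode {entry['mode']}",
            f"    bind {bind_ip}:{entry['listen_port']}",
        ]
        for host, addr in backends:
            lines.append(f"    server {host} {addr}:{entry['port']} check inter 2000 rise 2 fall 5")
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)

aodh_haproxy = {
    "aodh_api": {"enabled": "yes", "mode": "http", "external": False,
                 "port": "8042", "listen_port": "8042"},
    "aodh_api_external": {"enabled": "yes", "mode": "http", "external": True,
                          "external_fqdn": "api.testbed.osism.xyz",
                          "port": "8042", "listen_port": "8042"},
}

print(render_listen_blocks(
    aodh_haproxy,
    internal_vip="192.168.16.9",           # assumed internal VIP, not taken from this log
    external_vip="api.testbed.osism.xyz",  # external entries bind on the public endpoint
    backends=[("testbed-node-0", "192.168.16.10"),
              ("testbed-node-1", "192.168.16.11"),
              ("testbed-node-2", "192.168.16.12")],
))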
orchestrator | 2025-05-14 02:33:35.287948 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-14 02:33:35.287954 | orchestrator | Wednesday 14 May 2025 02:27:45 +0000 (0:00:01.053) 0:01:35.679 ********* 2025-05-14 02:33:35.287961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:33:35.287967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:33:35.287975 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.287989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:33:35.287996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:33:35.288001 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.288008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:33:35.288014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:33:35.288020 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.288028 | orchestrator | 2025-05-14 02:33:35.288034 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-14 02:33:35.288041 | orchestrator | Wednesday 14 May 2025 02:27:46 +0000 (0:00:01.262) 0:01:36.942 ********* 2025-05-14 02:33:35.288046 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.288054 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.288060 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.288066 | orchestrator | 2025-05-14 02:33:35.288072 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-14 02:33:35.288078 | orchestrator | Wednesday 14 May 2025 02:27:48 +0000 (0:00:01.670) 0:01:38.612 ********* 2025-05-14 02:33:35.288097 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.288104 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.288110 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.288117 | orchestrator | 2025-05-14 02:33:35.288123 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-14 02:33:35.288129 | orchestrator | Wednesday 14 May 2025 02:27:51 +0000 (0:00:02.485) 0:01:41.097 ********* 2025-05-14 02:33:35.288135 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.288141 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.288147 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.288153 | orchestrator | 2025-05-14 
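The proxysql-config tasks above only report "changed" and do not echo the rendered content: for each service a users fragment and a rules fragment are dropped that ProxySQL merges into its mysql_users and mysql_query_rules tables. The sketch below shows roughly what such fragments contain; the file layout, hostgroup ids and field selection are assumptions for illustration, not the literal kolla-ansible template output.

# Illustrative sketch of per-service ProxySQL fragments (field names follow
# ProxySQL's mysql_users / mysql_query_rules tables; values are assumptions).
import json

service = "aodh"

users_fragment = {
    "mysql_users": [
        {
            "username": service,                 # DB account used by the service
            "password": "<from passwords.yml>",  # deliberately elided here
            "default_hostgroup": 0,              # assumed writer hostgroup id
            "active": 1,
        }
    ]
}

rules_fragment = {
    "mysql_query_rules": [
        {
            "rule_id": 1,                 # assumed id
            "active": 1,
            "username": service,
            "schemaname": service,        # route this schema's traffic
            "destination_hostgroup": 0,   # to the writer hostgroup
            "apply": 1,
        }
    ]
}

print(json.dumps(users_fragment, indent=2))
print(json.dumps(rules_fragment, indent=2))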
02:33:35.288165 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-14 02:33:35.288171 | orchestrator | Wednesday 14 May 2025 02:27:51 +0000 (0:00:00.322) 0:01:41.420 ********* 2025-05-14 02:33:35.288177 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.288183 | orchestrator | 2025-05-14 02:33:35.288189 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-14 02:33:35.288199 | orchestrator | Wednesday 14 May 2025 02:27:53 +0000 (0:00:01.543) 0:01:42.963 ********* 2025-05-14 02:33:35.288206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-14 02:33:35.288219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-14 02:33:35.288225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-14 02:33:35.288232 | orchestrator | 2025-05-14 02:33:35.288237 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-14 02:33:35.288243 | orchestrator | Wednesday 14 May 2025 02:27:56 +0000 (0:00:03.118) 0:01:46.082 ********* 2025-05-14 
02:33:35.288249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-14 02:33:35.288256 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.288270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-14 02:33:35.288282 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.288288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-14 02:33:35.288294 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.288300 | orchestrator | 2025-05-14 02:33:35.288306 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-14 02:33:35.288312 | orchestrator | Wednesday 14 May 2025 02:27:57 +0000 (0:00:01.839) 0:01:47.922 ********* 2025-05-14 02:33:35.288318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:33:35.288325 | 
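Unlike the API services above, ceph-rgw brings no container of its own in this play; its haproxy entry points at externally deployed RadosGW instances through custom_member_list, exposing testbed-node-3 to testbed-node-5 on port 8081 behind the VIP port 6780 (and behind api.testbed.osism.xyz for the external frontend). A small helper that reproduces exactly the member lines quoted in the item is sketched below; it is illustrative, not code taken from the role.

# Rebuild the member lines visible in the ceph-rgw item above
# ("server <host> <ip>:8081 check inter 2000 rise 2 fall 5").
def rgw_member_lines(hosts, port=8081, inter=2000, rise=2, fall=5):
    return [
        f"server {name} {ip}:{port} check inter {inter} rise {rise} fall {fall}"
        for name, ip in hosts
    ]

rgw_hosts = [
    ("testbed-node-3", "192.168.16.13"),
    ("testbed-node-4", "192.168.16.14"),
    ("testbed-node-5", "192.168.16.15"),
]

for line in rgw_member_lines(rgw_hosts):
    print(line)
# Frontend side (from the item): listen port 6780 on the internal VIP and,
# for radosgw_external, on api.testbed.osism.xyz.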
orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:33:35.288333 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.288339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:33:35.288346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:33:35.288352 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.288360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:33:35.288377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:33:35.288389 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.288396 | orchestrator | 2025-05-14 02:33:35.288406 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-14 02:33:35.288412 | orchestrator | Wednesday 14 May 2025 02:28:00 +0000 (0:00:02.274) 0:01:50.196 ********* 2025-05-14 02:33:35.288418 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.288425 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.288431 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.288437 | orchestrator | 2025-05-14 02:33:35.288444 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-14 02:33:35.288450 | orchestrator | Wednesday 14 May 2025 02:28:00 +0000 (0:00:00.678) 0:01:50.875 ********* 2025-05-14 02:33:35.288456 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.288462 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.288468 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.288474 | orchestrator | 2025-05-14 02:33:35.288481 | 
orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-14 02:33:35.288565 | orchestrator | Wednesday 14 May 2025 02:28:01 +0000 (0:00:01.001) 0:01:51.876 ********* 2025-05-14 02:33:35.288573 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.288579 | orchestrator | 2025-05-14 02:33:35.288585 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-14 02:33:35.288591 | orchestrator | Wednesday 14 May 2025 02:28:02 +0000 (0:00:00.942) 0:01:52.819 ********* 2025-05-14 02:33:35.288597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.288606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.288668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 
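Every container definition echoed in this play uses the same healthcheck shape: interval/retries/start_period/timeout of 30/3/5/30 and a CMD-SHELL test that is healthcheck_curl <url> for HTTP APIs, healthcheck_port <service> <port> for processes expected to hold a connection (5672 for RabbitMQ, 3306 for MariaDB), or healthcheck_listen <service> <port> for listeners such as proxysql and haproxy-ssh. A sketch of translating such a dict into Docker Engine healthcheck options follows; the field names match the Docker API, while reading the bare numbers as seconds is an assumption.

# Illustrative conversion of a kolla-style healthcheck dict (as echoed in the
# items above) into the Docker Engine API healthcheck structure.
def to_docker_healthcheck(hc):
    seconds = 1_000_000_000  # Docker expects durations in nanoseconds
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776']
        "Interval": int(hc["interval"]) * seconds,
        "Timeout": int(hc["timeout"]) * seconds,
        "Retries": int(hc["retries"]),
        "StartPeriod": int(hc["start_period"]) * seconds,
    }

cinder_api_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
    "timeout": "30",
}

print(to_docker_healthcheck(cinder_api_hc))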
02:33:35.288700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.288709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288729 | orchestrator | 2025-05-14 02:33:35.288736 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-14 02:33:35.288742 | orchestrator | Wednesday 14 May 2025 02:28:06 +0000 (0:00:03.917) 0:01:56.736 ********* 2025-05-14 02:33:35.288749 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.288760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288802 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.288810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.288815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288847 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.288853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.288860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.288884 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.288890 | orchestrator | 2025-05-14 02:33:35.288895 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-14 02:33:35.288901 | orchestrator | Wednesday 14 May 2025 02:28:07 +0000 (0:00:01.174) 0:01:57.911 ********* 2025-05-14 02:33:35.288908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 02:33:35.288919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}})
2025-05-14 02:33:35.288926 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:33:35.288936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-14 02:33:35.288943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-14 02:33:35.288949 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:33:35.288956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-14 02:33:35.288962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-14 02:33:35.288968 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:33:35.288974 | orchestrator |
2025-05-14 02:33:35.288981 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-05-14 02:33:35.288987 | orchestrator | Wednesday 14 May 2025 02:28:09 +0000 (0:00:01.110) 0:01:59.021 *********
2025-05-14 02:33:35.288993 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:33:35.288999 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:33:35.289005 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:33:35.289012 | orchestrator |
2025-05-14 02:33:35.289018 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-05-14 02:33:35.289024 | orchestrator | Wednesday 14 May 2025 02:28:10 +0000 (0:00:01.551) 0:02:00.573 *********
2025-05-14 02:33:35.289030 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:33:35.289036 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:33:35.289043 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:33:35.289050 | orchestrator |
2025-05-14 02:33:35.289056 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-05-14 02:33:35.289067 | orchestrator | Wednesday 14 May 2025 02:28:13 +0000 (0:00:02.449) 0:02:03.022 *********
2025-05-14 02:33:35.289073 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:33:35.289079 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:33:35.289085 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:33:35.289091 | orchestrator |
2025-05-14 02:33:35.289097 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-05-14 02:33:35.289104 | orchestrator | Wednesday 14 May 2025 02:28:13 +0000 (0:00:00.436) 0:02:03.459 *********
2025-05-14 02:33:35.289110 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:33:35.289116 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:33:35.289122 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:33:35.289129 | orchestrator |
2025-05-14 02:33:35.289135 | orchestrator | TASK [include_role : designate] ************************************************
2025-05-14 02:33:35.289140 | orchestrator | Wednesday 14 May 2025 02:28:14 +0000 (0:00:00.541) 0:02:04.001 *********
2025-05-14 02:33:35.289147 |
orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.289153 | orchestrator | 2025-05-14 02:33:35.289160 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-14 02:33:35.289166 | orchestrator | Wednesday 14 May 2025 02:28:15 +0000 (0:00:01.052) 0:02:05.053 ********* 2025-05-14 02:33:35.289174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:33:35.289187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:33:35.289197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:33:35.289249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:33:35.289255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:33:35.289304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:33:35.289316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289387 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289393 | orchestrator | 2025-05-14 02:33:35.289404 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-14 02:33:35.289411 | orchestrator | Wednesday 14 May 2025 02:28:20 +0000 (0:00:05.327) 0:02:10.381 ********* 2025-05-14 02:33:35.289420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:33:35.289431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:33:35.289438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289823 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.289830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:33:35.289837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:33:35.289843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:33:35.289886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289900 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.289906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:33:35.289912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.289944 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-14 02:33:35.289950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-14 02:33:35.289956 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:33:35.289963 | orchestrator |
2025-05-14 02:33:35.289969 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-05-14 02:33:35.289976 | orchestrator | Wednesday 14 May 2025 02:28:21 +0000 (0:00:01.168) 0:02:11.549 *********
2025-05-14 02:33:35.289983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-14 02:33:35.289990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-14 02:33:35.289996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-14 02:33:35.290003 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:33:35.290009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-14 02:33:35.290043 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:33:35.290050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-14 02:33:35.290055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-14 02:33:35.290062 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:33:35.290067 | orchestrator |
2025-05-14 02:33:35.290073 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-05-14 02:33:35.290078 | orchestrator | Wednesday 14 May 2025 02:28:23 +0000 (0:00:01.742) 0:02:13.291 *********
2025-05-14 02:33:35.290085 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:33:35.290091 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:33:35.290097 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:33:35.290108 | orchestrator |
2025-05-14 02:33:35.290114 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-05-14 02:33:35.290119 | orchestrator | Wednesday 14 May 2025 02:28:24 +0000 (0:00:01.277) 0:02:14.568 *********
2025-05-14 02:33:35.290126 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:33:35.290132 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:33:35.290138 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:33:35.290144 | orchestrator |
2025-05-14 02:33:35.290150 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-05-14 02:33:35.290156 | orchestrator | Wednesday 14 May 2025 02:28:26 +0000 (0:00:02.318) 0:02:16.887 *********
2025-05-14 02:33:35.290162 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:33:35.290168 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:33:35.290174 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:33:35.290180 | orchestrator |
2025-05-14 02:33:35.290186 | orchestrator | TASK [include_role : glance] ***************************************************
2025-05-14 02:33:35.290196 | orchestrator | Wednesday 14 May 2025 02:28:27 +0000 (0:00:00.482) 0:02:17.370 *********
2025-05-14 02:33:35.290202 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-14 02:33:35.290208 | orchestrator |
2025-05-14 02:33:35.290214 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-05-14 02:33:35.290223 | orchestrator | Wednesday 14 May 2025 02:28:28 +0000 (0:00:01.091) 0:02:18.461 *********
2025-05-14 02:33:35.290231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check
inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:33:35.290239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.290259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:33:35.290266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.290287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:33:35.290295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.290305 | orchestrator | 2025-05-14 02:33:35.290311 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-14 02:33:35.290316 | orchestrator | Wednesday 14 May 2025 02:28:33 +0000 (0:00:05.213) 0:02:23.675 ********* 2025-05-14 02:33:35.290330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:33:35.290337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.290351 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.290365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:33:35.290373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.290379 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.290386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:33:35.290404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.290410 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.290416 | orchestrator | 2025-05-14 02:33:35.290423 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-14 02:33:35.290429 | orchestrator | Wednesday 14 May 2025 02:28:38 +0000 (0:00:04.861) 0:02:28.536 ********* 2025-05-14 02:33:35.290435 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:33:35.290445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:33:35.290452 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.290458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:33:35.290468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:33:35.290474 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.290484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:33:35.290490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:33:35.290496 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.290502 | orchestrator | 2025-05-14 02:33:35.290508 | 
orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-14 02:33:35.290514 | orchestrator | Wednesday 14 May 2025 02:28:42 +0000 (0:00:04.185) 0:02:32.722 ********* 2025-05-14 02:33:35.290520 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.290526 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.290532 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.290538 | orchestrator | 2025-05-14 02:33:35.290544 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-14 02:33:35.290550 | orchestrator | Wednesday 14 May 2025 02:28:44 +0000 (0:00:01.277) 0:02:33.999 ********* 2025-05-14 02:33:35.290556 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.290562 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.290568 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.290578 | orchestrator | 2025-05-14 02:33:35.290584 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-14 02:33:35.290590 | orchestrator | Wednesday 14 May 2025 02:28:46 +0000 (0:00:02.241) 0:02:36.241 ********* 2025-05-14 02:33:35.290596 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.290602 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.290608 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.290614 | orchestrator | 2025-05-14 02:33:35.290620 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-14 02:33:35.290626 | orchestrator | Wednesday 14 May 2025 02:28:46 +0000 (0:00:00.406) 0:02:36.648 ********* 2025-05-14 02:33:35.290632 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.290673 | orchestrator | 2025-05-14 02:33:35.290680 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-14 02:33:35.290686 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:00.946) 0:02:37.595 ********* 2025-05-14 02:33:35.290692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:33:35.290700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 
02:33:35.290715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:33:35.290721 | orchestrator | 2025-05-14 02:33:35.290727 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-14 02:33:35.290733 | orchestrator | Wednesday 14 May 2025 02:28:51 +0000 (0:00:04.280) 0:02:41.876 ********* 2025-05-14 02:33:35.290739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:33:35.290749 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.290756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:33:35.290762 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.290769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:33:35.290776 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.290782 | orchestrator | 2025-05-14 02:33:35.290788 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 
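
The glance and grafana entries logged above show the shape of the per-service dicts that the haproxy-config role iterates over: a mode, a port, optional `backend_http_extra` directives, and (for glance) an explicit `custom_member_list` of server lines. Below is a minimal, illustrative Python sketch of how such a dict could be rendered into an HAProxy backend stanza. It is not kolla-ansible's actual template; the dict is a trimmed copy of the `glance_api` value from the log, and the rendering logic is an assumption made only to show how the pieces fit together.

```python
# Illustrative sketch only -- NOT the kolla-ansible haproxy-config template.
# It shows how the service values logged above (mode, backend_http_extra,
# custom_member_list) could map onto an HAProxy backend stanza.

glance_api = {  # trimmed copy of the 'glance_api' value from the log
    "enabled": True,
    "mode": "http",
    "port": "9292",
    "backend_http_extra": ["timeout server 6h"],
    "custom_member_list": [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        "",  # the trailing empty entry seen in the log is filtered out below
    ],
}


def render_backend(name: str, svc: dict) -> str:
    """Render a plausible HAProxy backend block from a service dict."""
    lines = [f"backend {name}_back", f"    mode {svc['mode']}"]
    lines += [f"    {extra}" for extra in svc.get("backend_http_extra", [])]
    lines += [f"    {member}" for member in svc["custom_member_list"] if member]
    return "\n".join(lines)


print(render_backend("glance_api", glance_api))
```
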
2025-05-14 02:33:35.290794 | orchestrator | Wednesday 14 May 2025 02:28:52 +0000 (0:00:00.868) 0:02:42.744 ********* 2025-05-14 02:33:35.290800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:33:35.290807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:33:35.290814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:33:35.290820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:33:35.290826 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.290832 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.290838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:33:35.290847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:33:35.290854 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.290860 | orchestrator | 2025-05-14 02:33:35.290866 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-14 02:33:35.290872 | orchestrator | Wednesday 14 May 2025 02:28:53 +0000 (0:00:01.047) 0:02:43.791 ********* 2025-05-14 02:33:35.290878 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.290887 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.290894 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.290900 | orchestrator | 2025-05-14 02:33:35.290906 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-14 02:33:35.290917 | orchestrator | Wednesday 14 May 2025 02:28:55 +0000 (0:00:01.263) 0:02:45.055 ********* 2025-05-14 02:33:35.290923 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.290929 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.290935 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.290940 | orchestrator | 2025-05-14 02:33:35.290946 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-14 02:33:35.290952 | orchestrator | Wednesday 14 May 2025 02:28:57 +0000 (0:00:02.063) 0:02:47.119 ********* 2025-05-14 02:33:35.290958 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.290965 | orchestrator | 2025-05-14 02:33:35.290971 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-14 02:33:35.290977 | orchestrator | Wednesday 14 May 2025 02:28:58 +0000 (0:00:01.126) 0:02:48.245 ********* 2025-05-14 02:33:35.290984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 
'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.290993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.291000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.291013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 
'no'}}}}) 2025-05-14 02:33:35.291028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.291034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.291041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.291047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.291053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.291065 | orchestrator | 2025-05-14 02:33:35.291075 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-14 02:33:35.291081 | orchestrator | Wednesday 14 May 2025 02:29:05 +0000 (0:00:07.295) 0:02:55.541 ********* 2025-05-14 02:33:35.291090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.291097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.291103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.291109 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.291116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.291126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.291140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.291147 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.291153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.291160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.291167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.291173 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.291180 | orchestrator | 2025-05-14 02:33:35.291186 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-14 02:33:35.291192 | orchestrator | Wednesday 14 May 2025 02:29:06 +0000 (0:00:00.851) 0:02:56.393 ********* 2025-05-14 02:33:35.291203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291233 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.291243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291268 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.291274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:33:35.291299 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.291305 | orchestrator | 2025-05-14 02:33:35.291311 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-14 02:33:35.291318 | orchestrator | Wednesday 14 May 2025 02:29:07 +0000 (0:00:01.533) 0:02:57.926 ********* 2025-05-14 02:33:35.291324 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.291330 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.291337 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.291343 | orchestrator | 2025-05-14 02:33:35.291349 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-14 02:33:35.291356 | orchestrator | Wednesday 14 May 2025 02:29:09 +0000 (0:00:01.395) 0:02:59.322 ********* 2025-05-14 02:33:35.291362 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.291368 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.291375 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.291387 | orchestrator | 2025-05-14 02:33:35.291393 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-14 02:33:35.291400 | orchestrator | Wednesday 14 May 2025 02:29:11 +0000 (0:00:02.389) 0:03:01.712 ********* 2025-05-14 02:33:35.291406 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.291412 | orchestrator | 2025-05-14 02:33:35.291419 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-14 02:33:35.291424 | orchestrator | Wednesday 14 May 2025 02:29:12 +0000 (0:00:01.082) 0:03:02.794 ********* 2025-05-14 02:33:35.292119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:33:35.292154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:33:35.292200 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:33:35.292213 | orchestrator | 2025-05-14 02:33:35.292224 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-14 02:33:35.292236 | orchestrator | Wednesday 14 May 2025 02:29:16 +0000 (0:00:03.825) 0:03:06.620 ********* 2025-05-14 02:33:35.292249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:33:35.292267 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.292299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:33:35.292308 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.292315 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:33:35.292326 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.292332 | orchestrator | 2025-05-14 02:33:35.292348 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-14 02:33:35.292357 | orchestrator | Wednesday 14 May 2025 02:29:17 +0000 (0:00:00.913) 0:03:07.534 ********* 2025-05-14 02:33:35.292364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:33:35.292371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:33:35.292380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:33:35.292392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:33:35.292404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-14 02:33:35.292414 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.292420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:33:35.292431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:33:35.292437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:33:35.292444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:33:35.292528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-14 02:33:35.292541 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.292547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:33:35.292553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:33:35.292590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  
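
The loop items above show the per-service 'haproxy' maps that the haproxy-config role iterates over for Horizon: an internal listener, a redirect listener, their external counterparts bound to api.testbed.osism.xyz, and the acme_client helper backend. As a rough illustration of how one such entry could be turned into an HAProxy stanza (a minimal sketch only, not kolla-ansible's actual template; the 192.168.16.9 bind address is an assumed VIP borrowed from the no_proxy values later in this log, and the dict literal is copied from the loop items above):

    # Sketch: render a bare-bones HAProxy "listen" block from one of the
    # service entries shown in the loop items above. Only keys visible in
    # this log are used; everything else is illustrative.
    def render_listen(name: str, svc: dict, vip: str = "192.168.16.9") -> str:
        """Return a toy HAProxy listen block for an enabled http-mode service."""
        if not svc.get("enabled") or svc.get("mode") != "http":
            return ""
        lines = [
            f"listen {name}",
            "    mode http",
            f"    bind {vip}:{svc['listen_port']}",
        ]
        # frontend_http_extra / backend_http_extra carry raw HAProxy
        # directives, e.g. the ACME use_backend rule and balance roundrobin
        # seen in the log.
        lines += [f"    {extra}" for extra in svc.get("frontend_http_extra", [])]
        lines += [f"    {extra}" for extra in svc.get("backend_http_extra", [])]
        return "\n".join(lines)

    horizon_external = {
        "enabled": True,
        "mode": "http",
        "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "443",
        "listen_port": "80",
        "frontend_http_extra": [
            "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"
        ],
        "backend_http_extra": ["balance roundrobin"],
        "tls_backend": "no",
    }

    print(render_listen("horizon_external", horizon_external))
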
2025-05-14 02:33:35.292601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:33:35.292607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-14 02:33:35.292613 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.292619 | orchestrator | 2025-05-14 02:33:35.292625 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-14 02:33:35.292657 | orchestrator | Wednesday 14 May 2025 02:29:18 +0000 (0:00:01.211) 0:03:08.745 ********* 2025-05-14 02:33:35.292663 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.292670 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.292676 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.292683 | orchestrator | 2025-05-14 02:33:35.292689 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-14 02:33:35.292695 | orchestrator | Wednesday 14 May 2025 02:29:20 +0000 (0:00:01.423) 0:03:10.169 ********* 2025-05-14 02:33:35.292701 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.292712 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.292718 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.292724 | orchestrator | 2025-05-14 02:33:35.292730 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-14 02:33:35.292736 | orchestrator | Wednesday 14 May 2025 02:29:22 +0000 (0:00:02.045) 0:03:12.214 ********* 2025-05-14 02:33:35.292743 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.292749 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.292755 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.292760 | orchestrator | 2025-05-14 02:33:35.292766 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-14 02:33:35.292772 | orchestrator | Wednesday 14 May 2025 02:29:22 +0000 (0:00:00.401) 0:03:12.615 ********* 2025-05-14 02:33:35.292778 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.292783 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.292790 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.292796 | orchestrator | 2025-05-14 02:33:35.292802 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-14 02:33:35.292808 | orchestrator | Wednesday 14 May 2025 02:29:22 +0000 (0:00:00.243) 0:03:12.858 ********* 2025-05-14 02:33:35.292814 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.292819 | orchestrator | 2025-05-14 02:33:35.292825 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-14 02:33:35.292832 | orchestrator | Wednesday 14 May 2025 02:29:24 +0000 (0:00:01.234) 0:03:14.093 ********* 2025-05-14 02:33:35.292839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:33:35.292851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:33:35.292862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:33:35.292874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:33:35.292881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:33:35.292887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:33:35.292894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:33:35.292905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:33:35.292914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:33:35.292925 | orchestrator | 2025-05-14 02:33:35.292931 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 
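
Each keystone loop item above bundles the container image, its bind mounts, a per-node healthcheck_curl probe against port 5000, and the keystone_internal/keystone_external HAProxy maps. Purely to make the healthcheck fields concrete, the sketch below maps such a dict onto docker CLI health options; treating the numeric values as seconds is an assumption, and Kolla wires these checks through its own container modules rather than docker run.

    # Illustrative sketch only: translate the healthcheck dict shown in the
    # keystone items above into `docker run` health flags.
    def healthcheck_flags(hc: dict) -> list[str]:
        """Map a kolla-style healthcheck dict onto docker CLI health options."""
        kind, cmd = hc["test"]  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000']
        assert kind == "CMD-SHELL"
        return [
            f"--health-cmd={cmd}",
            f"--health-interval={hc['interval']}s",    # seconds assumed
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
        ]

    keystone_hc = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
        "timeout": "30",
    }

    print(" ".join(healthcheck_flags(keystone_hc)))
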
2025-05-14 02:33:35.292937 | orchestrator | Wednesday 14 May 2025 02:29:28 +0000 (0:00:04.448) 0:03:18.541 ********* 2025-05-14 02:33:35.292944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:33:35.292950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:33:35.292956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:33:35.292963 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.292982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:33:35.293008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:33:35.293016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:33:35.293022 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.293029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:33:35.293035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:33:35.293042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:33:35.293049 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.293059 | orchestrator | 2025-05-14 02:33:35.293065 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-14 02:33:35.293071 | orchestrator | Wednesday 14 May 2025 02:29:29 +0000 (0:00:00.891) 0:03:19.433 ********* 2025-05-14 02:33:35.293082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:33:35.293092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:33:35.293099 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.293105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:33:35.293112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:33:35.293118 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.293125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:33:35.293131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:33:35.293137 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.293143 | orchestrator | 2025-05-14 02:33:35.293149 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-14 02:33:35.293168 | orchestrator | Wednesday 14 May 2025 02:29:30 +0000 (0:00:01.397) 0:03:20.830 ********* 2025-05-14 02:33:35.293173 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.293179 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.293185 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.293190 | orchestrator | 2025-05-14 02:33:35.293196 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-14 02:33:35.293202 | orchestrator | Wednesday 14 May 2025 02:29:32 +0000 (0:00:01.505) 0:03:22.336 ********* 2025-05-14 02:33:35.293208 | orchestrator | changed: 
[testbed-node-0] 2025-05-14 02:33:35.293214 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.293221 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.293227 | orchestrator | 2025-05-14 02:33:35.293233 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-14 02:33:35.293239 | orchestrator | Wednesday 14 May 2025 02:29:34 +0000 (0:00:02.469) 0:03:24.806 ********* 2025-05-14 02:33:35.293245 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.293261 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.293267 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.293274 | orchestrator | 2025-05-14 02:33:35.293280 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-14 02:33:35.293286 | orchestrator | Wednesday 14 May 2025 02:29:35 +0000 (0:00:00.301) 0:03:25.107 ********* 2025-05-14 02:33:35.293292 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.293297 | orchestrator | 2025-05-14 02:33:35.293303 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-14 02:33:35.293309 | orchestrator | Wednesday 14 May 2025 02:29:36 +0000 (0:00:01.418) 0:03:26.526 ********* 2025-05-14 02:33:35.293321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:33:35.293344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:33:35.293357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:33:35.293375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293381 | orchestrator | 2025-05-14 02:33:35.293387 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-14 02:33:35.293393 | orchestrator | Wednesday 14 May 2025 02:29:41 +0000 (0:00:04.843) 0:03:31.369 ********* 2025-05-14 02:33:35.293408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:33:35.293414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293420 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.293426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:33:35.293436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293443 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.293459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:33:35.293469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293476 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.293482 | orchestrator | 2025-05-14 02:33:35.293488 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-14 02:33:35.293494 | orchestrator | Wednesday 14 May 2025 02:29:42 +0000 (0:00:00.858) 0:03:32.228 ********* 2025-05-14 02:33:35.293500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:33:35.293507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:33:35.293513 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.293519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:33:35.293525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:33:35.293531 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.293537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:33:35.293550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:33:35.293556 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.293562 | orchestrator | 2025-05-14 
02:33:35.293568 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-14 02:33:35.293574 | orchestrator | Wednesday 14 May 2025 02:29:43 +0000 (0:00:01.504) 0:03:33.733 ********* 2025-05-14 02:33:35.293580 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.293586 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.293592 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.293598 | orchestrator | 2025-05-14 02:33:35.293603 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-14 02:33:35.293609 | orchestrator | Wednesday 14 May 2025 02:29:45 +0000 (0:00:01.350) 0:03:35.084 ********* 2025-05-14 02:33:35.293615 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.293621 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.293627 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.293633 | orchestrator | 2025-05-14 02:33:35.293656 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-14 02:33:35.293662 | orchestrator | Wednesday 14 May 2025 02:29:47 +0000 (0:00:02.309) 0:03:37.393 ********* 2025-05-14 02:33:35.293667 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.293683 | orchestrator | 2025-05-14 02:33:35.293689 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-14 02:33:35.293695 | orchestrator | Wednesday 14 May 2025 02:29:48 +0000 (0:00:01.254) 0:03:38.648 ********* 2025-05-14 02:33:35.293713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-14 02:33:35.293723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-14 02:33:35.293755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-14 02:33:35.293791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293809 | orchestrator | 2025-05-14 02:33:35.293815 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-14 02:33:35.293821 | orchestrator | Wednesday 14 May 2025 02:29:53 +0000 (0:00:04.327) 0:03:42.976 ********* 2025-05-14 02:33:35.293831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-14 02:33:35.293840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293864 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.293870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': 
'8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-14 02:33:35.293877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293929 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.293935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-14 02:33:35.293941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.293961 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.293967 | orchestrator | 2025-05-14 02:33:35.293973 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-14 02:33:35.293979 | orchestrator | Wednesday 14 May 2025 02:29:53 +0000 (0:00:00.967) 0:03:43.943 ********* 2025-05-14 02:33:35.293985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:33:35.294002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:33:35.294009 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:33:35.294060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:33:35.294067 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:33:35.294079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:33:35.294084 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294090 | 
orchestrator | 2025-05-14 02:33:35.294096 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-14 02:33:35.294102 | orchestrator | Wednesday 14 May 2025 02:29:55 +0000 (0:00:01.294) 0:03:45.237 ********* 2025-05-14 02:33:35.294109 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.294115 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.294120 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.294126 | orchestrator | 2025-05-14 02:33:35.294132 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-14 02:33:35.294138 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:01.287) 0:03:46.525 ********* 2025-05-14 02:33:35.294144 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.294165 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.294171 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.294177 | orchestrator | 2025-05-14 02:33:35.294183 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-14 02:33:35.294189 | orchestrator | Wednesday 14 May 2025 02:29:58 +0000 (0:00:02.084) 0:03:48.609 ********* 2025-05-14 02:33:35.294195 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.294201 | orchestrator | 2025-05-14 02:33:35.294207 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-14 02:33:35.294213 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:01.494) 0:03:50.104 ********* 2025-05-14 02:33:35.294219 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:33:35.294225 | orchestrator | 2025-05-14 02:33:35.294231 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-14 02:33:35.294237 | orchestrator | Wednesday 14 May 2025 02:30:03 +0000 (0:00:03.324) 0:03:53.429 ********* 2025-05-14 02:33:35.294245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:33:35.294275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:33:35.294282 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:33:35.294295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:33:35.294301 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:33:35.294327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:33:35.294333 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294342 | orchestrator | 2025-05-14 02:33:35.294347 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-14 02:33:35.294354 | orchestrator | Wednesday 14 May 2025 02:30:07 +0000 (0:00:04.419) 0:03:57.849 ********* 2025-05-14 02:33:35.294360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:33:35.294393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:33:35.294400 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:33:35.294417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:33:35.294423 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:33:35.294450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:33:35.294456 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294462 | orchestrator | 2025-05-14 02:33:35.294467 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-14 02:33:35.294474 | orchestrator | Wednesday 14 May 2025 02:30:10 +0000 (0:00:02.736) 0:04:00.585 ********* 2025-05-14 02:33:35.294480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:33:35.294487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:33:35.294493 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:33:35.294512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:33:35.294519 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' 
server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:33:35.294544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:33:35.294551 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294556 | orchestrator | 2025-05-14 02:33:35.294562 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-14 02:33:35.294568 | orchestrator | Wednesday 14 May 2025 02:30:14 +0000 (0:00:03.394) 0:04:03.980 ********* 2025-05-14 02:33:35.294574 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.294580 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.294585 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.294591 | orchestrator | 2025-05-14 02:33:35.294597 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-14 02:33:35.294603 | orchestrator | Wednesday 14 May 2025 02:30:16 +0000 (0:00:02.523) 0:04:06.504 ********* 2025-05-14 02:33:35.294609 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294615 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294621 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294627 | orchestrator | 2025-05-14 02:33:35.294632 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-14 02:33:35.294651 | orchestrator | Wednesday 14 May 2025 02:30:18 +0000 (0:00:01.751) 0:04:08.256 ********* 2025-05-14 02:33:35.294658 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294663 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294680 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294687 | orchestrator | 2025-05-14 02:33:35.294692 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-14 02:33:35.294699 | orchestrator | Wednesday 14 May 2025 02:30:18 +0000 (0:00:00.531) 0:04:08.787 ********* 2025-05-14 02:33:35.294705 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.294711 | orchestrator | 2025-05-14 02:33:35.294717 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-14 02:33:35.294728 | orchestrator | Wednesday 14 May 2025 02:30:20 +0000 (0:00:01.587) 0:04:10.375 ********* 2025-05-14 02:33:35.294734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-14 02:33:35.294741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-14 02:33:35.294765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-14 02:33:35.294772 | orchestrator | 2025-05-14 02:33:35.294777 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-14 02:33:35.294784 | orchestrator | Wednesday 14 May 2025 02:30:22 +0000 (0:00:01.882) 0:04:12.257 ********* 2025-05-14 02:33:35.294790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-14 02:33:35.294796 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-14 02:33:35.294821 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-14 02:33:35.294833 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294839 | orchestrator | 2025-05-14 02:33:35.294845 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-14 02:33:35.294851 | orchestrator | Wednesday 14 May 2025 02:30:22 +0000 (0:00:00.393) 0:04:12.651 ********* 2025-05-14 02:33:35.294857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-14 02:33:35.294864 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-14 02:33:35.294877 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-14 02:33:35.294889 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294894 | orchestrator | 2025-05-14 02:33:35.294909 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-14 02:33:35.294915 | orchestrator | Wednesday 14 May 2025 02:30:23 +0000 (0:00:00.959) 0:04:13.611 ********* 2025-05-14 02:33:35.294921 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294927 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294933 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294939 | orchestrator | 2025-05-14 02:33:35.294945 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-14 02:33:35.294955 | orchestrator 
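Each container definition in these mariadb and memcached tasks carries a 'healthcheck' block (interval, retries, start_period, test, timeout), for example the CMD-SHELL test 'healthcheck_listen memcached 11211'. A plausible reading, sketched below under the assumption that the values are seconds and map onto Docker's generic health options, is that the block translates into per-container health settings roughly as follows; the helper name healthcheck_to_docker_args is hypothetical.

# Sketch only: assumes the logged values are seconds and correspond to Docker's
# generic --health-* options; the helper name is hypothetical, not part of kolla.
def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Translate a kolla-style healthcheck dict into docker-run style flags."""
    test = hc["test"]
    # A ['CMD-SHELL', ...] test means: run the remaining elements through a shell.
    cmd = " ".join(test[1:]) if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]


MEMCACHED_HC = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen memcached 11211"],
    "timeout": "30",
}

print(healthcheck_to_docker_args(MEMCACHED_HC))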
| Wednesday 14 May 2025 02:30:24 +0000 (0:00:00.873) 0:04:14.485 ********* 2025-05-14 02:33:35.294961 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.294967 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.294973 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.294979 | orchestrator | 2025-05-14 02:33:35.294985 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-14 02:33:35.294991 | orchestrator | Wednesday 14 May 2025 02:30:26 +0000 (0:00:01.573) 0:04:16.058 ********* 2025-05-14 02:33:35.294997 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.295003 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.295009 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.295015 | orchestrator | 2025-05-14 02:33:35.295021 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-14 02:33:35.295032 | orchestrator | Wednesday 14 May 2025 02:30:26 +0000 (0:00:00.290) 0:04:16.349 ********* 2025-05-14 02:33:35.295038 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.295044 | orchestrator | 2025-05-14 02:33:35.295050 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-14 02:33:35.295056 | orchestrator | Wednesday 14 May 2025 02:30:28 +0000 (0:00:01.627) 0:04:17.976 ********* 2025-05-14 02:33:35.295063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:33:35.295070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:33:35.295119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.295177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.295195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:33:35.295207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.295241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 
02:33:35.295247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.295254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:33:35.295282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:33:35.295350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:33:35.295432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.295492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.295512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.295532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.295557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.295586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.295609 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.295616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.295629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295723 | orchestrator | 2025-05-14 02:33:35.295731 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-14 02:33:35.295737 | orchestrator | Wednesday 14 May 2025 02:30:33 +0000 (0:00:05.097) 0:04:23.073 ********* 2025-05-14 02:33:35.295756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:33:35.295764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:33:35.295802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:33:35.295829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.295894 | orchestrator | 2025-05-14 02:33:35 | INFO  | Task 079e0b06-b63f-44de-91ec-45424f5c2aff is in state STARTED 2025-05-14 02:33:35.295907 | orchestrator | 2025-05-14 02:33:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:35.295913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.295943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:33:35.295962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295975 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.295982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.295988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.295998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.296015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.296025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.296060 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.296067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:33:35.296073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.296100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.296114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.296165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:33:35.296172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.296183 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296200 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.296216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.296227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.296234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.296251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.296263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:33:35.296276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:33:35.296290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:33:35.296307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296313 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.296319 | orchestrator | 2025-05-14 02:33:35.296326 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-14 02:33:35.296332 | orchestrator | Wednesday 14 May 2025 02:30:34 +0000 (0:00:01.711) 0:04:24.785 ********* 2025-05-14 02:33:35.296338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:33:35.296345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:33:35.296351 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.296358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:33:35.296364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:33:35.296370 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.296377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-14 
02:33:35.296392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:33:35.296398 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.296404 | orchestrator | 2025-05-14 02:33:35.296410 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-14 02:33:35.296416 | orchestrator | Wednesday 14 May 2025 02:30:36 +0000 (0:00:01.508) 0:04:26.293 ********* 2025-05-14 02:33:35.296426 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.296432 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.296438 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.296444 | orchestrator | 2025-05-14 02:33:35.296450 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-14 02:33:35.296455 | orchestrator | Wednesday 14 May 2025 02:30:37 +0000 (0:00:01.312) 0:04:27.606 ********* 2025-05-14 02:33:35.296461 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.296467 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.296473 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.296479 | orchestrator | 2025-05-14 02:33:35.296484 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-14 02:33:35.296498 | orchestrator | Wednesday 14 May 2025 02:30:39 +0000 (0:00:02.326) 0:04:29.933 ********* 2025-05-14 02:33:35.296503 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.296509 | orchestrator | 2025-05-14 02:33:35.296525 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-14 02:33:35.296531 | orchestrator | Wednesday 14 May 2025 02:30:41 +0000 (0:00:01.344) 0:04:31.277 ********* 2025-05-14 02:33:35.296538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.296544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.296551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.296557 | orchestrator | 2025-05-14 02:33:35.296563 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-14 02:33:35.296576 | orchestrator | Wednesday 14 May 2025 02:30:44 +0000 (0:00:03.282) 0:04:34.559 ********* 2025-05-14 02:33:35.296617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.296629 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.296635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}})  2025-05-14 02:33:35.296658 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.296664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.296672 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.296678 | orchestrator | 2025-05-14 02:33:35.296683 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-14 02:33:35.296690 | orchestrator | Wednesday 14 May 2025 02:30:45 +0000 (0:00:00.734) 0:04:35.294 ********* 2025-05-14 02:33:35.296696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:33:35.296703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:33:35.296710 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.296716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:33:35.296723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:33:35.296729 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.296753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:33:35.296764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:33:35.296770 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.296776 | orchestrator | 2025-05-14 02:33:35.296786 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-14 02:33:35.296792 | orchestrator | Wednesday 14 May 2025 02:30:46 +0000 (0:00:00.976) 0:04:36.271 ********* 2025-05-14 02:33:35.296799 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.296805 | orchestrator | 
changed: [testbed-node-1] 2025-05-14 02:33:35.296810 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.296816 | orchestrator | 2025-05-14 02:33:35.296823 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-14 02:33:35.296829 | orchestrator | Wednesday 14 May 2025 02:30:47 +0000 (0:00:01.538) 0:04:37.810 ********* 2025-05-14 02:33:35.296836 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.296841 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.296848 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.296854 | orchestrator | 2025-05-14 02:33:35.296860 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-14 02:33:35.296866 | orchestrator | Wednesday 14 May 2025 02:30:50 +0000 (0:00:02.524) 0:04:40.335 ********* 2025-05-14 02:33:35.296872 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.296878 | orchestrator | 2025-05-14 02:33:35.296884 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-14 02:33:35.296890 | orchestrator | Wednesday 14 May 2025 02:30:52 +0000 (0:00:01.720) 0:04:42.055 ********* 2025-05-14 02:33:35.296897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.296906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.296943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.296967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.296992 | orchestrator | 2025-05-14 02:33:35.296999 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-14 02:33:35.297005 | orchestrator | Wednesday 14 May 2025 02:30:56 +0000 (0:00:04.766) 0:04:46.822 ********* 2025-05-14 02:33:35.297012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.297018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.297024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.297035 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.297051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.297061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.297068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.297074 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.297080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.297086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.297097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.297104 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.297110 | orchestrator | 2025-05-14 02:33:35.297116 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-14 02:33:35.297131 | orchestrator | Wednesday 14 May 2025 02:30:57 +0000 (0:00:00.673) 0:04:47.496 ********* 2025-05-14 02:33:35.297137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297166 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.297172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297196 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.297202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:33:35.297232 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.297238 | orchestrator | 2025-05-14 02:33:35.297244 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-14 02:33:35.297249 | orchestrator | Wednesday 14 May 2025 02:30:58 +0000 (0:00:01.231) 0:04:48.728 ********* 2025-05-14 02:33:35.297255 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.297261 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.297267 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.297273 | orchestrator | 2025-05-14 02:33:35.297279 | orchestrator | TASK [proxysql-config : Copying over 
nova ProxySQL rules config] *************** 2025-05-14 02:33:35.297285 | orchestrator | Wednesday 14 May 2025 02:31:00 +0000 (0:00:01.349) 0:04:50.077 ********* 2025-05-14 02:33:35.297291 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.297297 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.297303 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.297309 | orchestrator | 2025-05-14 02:33:35.297314 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-14 02:33:35.297320 | orchestrator | Wednesday 14 May 2025 02:31:02 +0000 (0:00:02.409) 0:04:52.486 ********* 2025-05-14 02:33:35.297326 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.297332 | orchestrator | 2025-05-14 02:33:35.297339 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-14 02:33:35.297345 | orchestrator | Wednesday 14 May 2025 02:31:03 +0000 (0:00:01.321) 0:04:53.808 ********* 2025-05-14 02:33:35.297351 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-14 02:33:35.297358 | orchestrator | 2025-05-14 02:33:35.297364 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-14 02:33:35.297379 | orchestrator | Wednesday 14 May 2025 02:31:05 +0000 (0:00:01.554) 0:04:55.363 ********* 2025-05-14 02:33:35.297389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-14 02:33:35.297397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-14 02:33:35.297403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-14 02:33:35.297416 | orchestrator | 2025-05-14 02:33:35.297422 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-14 02:33:35.297430 | orchestrator | Wednesday 14 May 2025 02:31:10 +0000 (0:00:05.293) 0:05:00.656 ********* 2025-05-14 02:33:35.297436 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:33:35.297442 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.297449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:33:35.297455 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.297461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:33:35.297467 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.297473 | orchestrator | 2025-05-14 02:33:35.297479 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-14 02:33:35.297485 | orchestrator | Wednesday 14 May 2025 02:31:12 +0000 (0:00:01.506) 0:05:02.162 ********* 2025-05-14 02:33:35.297491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:33:35.297497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:33:35.297504 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.297519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:33:35.297528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:33:35.297534 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.297540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}})  2025-05-14 02:33:35.297546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:33:35.297557 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.297563 | orchestrator | 2025-05-14 02:33:35.297569 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-14 02:33:35.297575 | orchestrator | Wednesday 14 May 2025 02:31:14 +0000 (0:00:02.443) 0:05:04.606 ********* 2025-05-14 02:33:35.297581 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.297587 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.297593 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.297599 | orchestrator | 2025-05-14 02:33:35.297604 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-14 02:33:35.297610 | orchestrator | Wednesday 14 May 2025 02:31:17 +0000 (0:00:03.142) 0:05:07.749 ********* 2025-05-14 02:33:35.297616 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.297623 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.297628 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.297634 | orchestrator | 2025-05-14 02:33:35.297661 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-14 02:33:35.297667 | orchestrator | Wednesday 14 May 2025 02:31:21 +0000 (0:00:03.862) 0:05:11.611 ********* 2025-05-14 02:33:35.297674 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-14 02:33:35.297680 | orchestrator | 2025-05-14 02:33:35.297686 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-14 02:33:35.297692 | orchestrator | Wednesday 14 May 2025 02:31:22 +0000 (0:00:01.296) 0:05:12.908 ********* 2025-05-14 02:33:35.297698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:33:35.297705 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.297711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:33:35.297717 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.297723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:33:35.297730 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.297736 | orchestrator | 2025-05-14 02:33:35.297742 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-14 02:33:35.297748 | orchestrator | Wednesday 14 May 2025 02:31:24 +0000 (0:00:01.761) 0:05:14.670 ********* 2025-05-14 02:33:35.297769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:33:35.297781 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.297787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:33:35.297793 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.297799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:33:35.297806 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.297811 | orchestrator | 2025-05-14 02:33:35.297818 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-14 02:33:35.297824 | orchestrator | Wednesday 14 May 2025 02:31:26 +0000 (0:00:01.904) 0:05:16.574 ********* 2025-05-14 02:33:35.297830 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.297836 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.297842 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.297848 | orchestrator | 2025-05-14 02:33:35.297854 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-14 02:33:35.297860 | orchestrator | Wednesday 14 May 2025 02:31:28 +0000 (0:00:02.055) 0:05:18.630 ********* 2025-05-14 
02:33:35.297866 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.297872 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.297878 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.297884 | orchestrator | 2025-05-14 02:33:35.297890 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-14 02:33:35.297896 | orchestrator | Wednesday 14 May 2025 02:31:31 +0000 (0:00:03.047) 0:05:21.678 ********* 2025-05-14 02:33:35.297903 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.297909 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.297915 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.297921 | orchestrator | 2025-05-14 02:33:35.297927 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-14 02:33:35.297933 | orchestrator | Wednesday 14 May 2025 02:31:35 +0000 (0:00:03.399) 0:05:25.077 ********* 2025-05-14 02:33:35.297940 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-14 02:33:35.297946 | orchestrator | 2025-05-14 02:33:35.297952 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-14 02:33:35.297958 | orchestrator | Wednesday 14 May 2025 02:31:36 +0000 (0:00:01.152) 0:05:26.230 ********* 2025-05-14 02:33:35.297964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:33:35.297975 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.297981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:33:35.297987 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.298008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:33:35.298040 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.298048 | orchestrator | 2025-05-14 02:33:35.298054 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 
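Editorial annotation (not part of the captured console output): the loop items in the "Copying over ... haproxy config" tasks above all share one service-definition shape. The sketch below restates that structure in plain Python for the nova-novncproxy entry, with every field value copied from the log; the select_frontends helper and its enabled/external filtering rule are illustrative assumptions about how such a structure is typically consumed, not the actual logic of the kolla-ansible haproxy-config role.

# Shape of one service entry as it appears in the loop items above;
# all values are copied verbatim from the nova-novncproxy item in the log.
nova_novncproxy = {
    "group": "nova-novncproxy",
    "enabled": True,
    "haproxy": {
        "nova_novncproxy": {
            "enabled": True,
            "mode": "http",
            "external": False,
            "port": "6080",
            "listen_port": "6080",
            "backend_http_extra": ["timeout tunnel 1h"],
        },
        "nova_novncproxy_external": {
            "enabled": True,
            "mode": "http",
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "6080",
            "listen_port": "6080",
            "backend_http_extra": ["timeout tunnel 1h"],
        },
    },
}

def select_frontends(service, external):
    # Illustrative helper (assumption, not kolla-ansible code): keep only the
    # haproxy sub-entries that are enabled and belong to the requested side.
    # The log shows 'enabled' both as booleans and as the string 'yes'.
    return {
        name: cfg
        for name, cfg in service.get("haproxy", {}).items()
        if cfg.get("enabled") in (True, "yes") and cfg.get("external") is external
    }

# External side of the noVNC proxy listener, i.e. the entry that carries
# external_fqdn 'api.testbed.osism.xyz' and listen port 6080:
print(select_frontends(nova_novncproxy, external=True))

Filtering this way is consistent with the skipped iterations visible above for nova-spicehtml5proxy and nova-serialproxy, whose entries are marked 'enabled': False throughout; the exact skip condition, however, is not printed in the log.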
2025-05-14 02:33:35.298061 | orchestrator | Wednesday 14 May 2025 02:31:37 +0000 (0:00:01.292) 0:05:27.523 ********* 2025-05-14 02:33:35.298067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:33:35.298073 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.298079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:33:35.298086 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.298092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:33:35.298099 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.298105 | orchestrator | 2025-05-14 02:33:35.298111 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-14 02:33:35.298117 | orchestrator | Wednesday 14 May 2025 02:31:39 +0000 (0:00:01.571) 0:05:29.094 ********* 2025-05-14 02:33:35.298123 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.298129 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.298135 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.298146 | orchestrator | 2025-05-14 02:33:35.298152 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-14 02:33:35.298159 | orchestrator | Wednesday 14 May 2025 02:31:40 +0000 (0:00:01.546) 0:05:30.641 ********* 2025-05-14 02:33:35.298165 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.298171 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.298177 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.298182 | orchestrator | 2025-05-14 02:33:35.298188 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-14 02:33:35.298194 | orchestrator | Wednesday 14 May 2025 02:31:43 +0000 (0:00:02.609) 0:05:33.250 ********* 2025-05-14 02:33:35.298200 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.298207 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.298214 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.298220 | orchestrator | 2025-05-14 
02:33:35.298227 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-14 02:33:35.298233 | orchestrator | Wednesday 14 May 2025 02:31:47 +0000 (0:00:04.294) 0:05:37.545 ********* 2025-05-14 02:33:35.298239 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.298245 | orchestrator | 2025-05-14 02:33:35.298252 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-14 02:33:35.298258 | orchestrator | Wednesday 14 May 2025 02:31:49 +0000 (0:00:01.681) 0:05:39.226 ********* 2025-05-14 02:33:35.298280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.298289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.298296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:33:35.298303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:33:35.298315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.298366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.298378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.298385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:33:35.298396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.298417 | orchestrator | 2025-05-14 02:33:35.298423 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-14 02:33:35.298429 | orchestrator | Wednesday 14 May 2025 02:31:53 +0000 (0:00:04.488) 0:05:43.715 ********* 2025-05-14 02:33:35.298451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.298464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:33:35.298472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.298508 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.298515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.298529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:33:35.298536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.298556 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.298577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.298584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:33:35.298594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:33:35.298608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:33:35.298614 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.298620 | orchestrator | 2025-05-14 02:33:35.298627 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-14 02:33:35.298633 | orchestrator | Wednesday 14 May 2025 02:31:54 +0000 (0:00:01.155) 0:05:44.870 ********* 2025-05-14 02:33:35.298657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:33:35.298664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:33:35.298670 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.298687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:33:35.298697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:33:35.298704 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.298710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:33:35.298716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:33:35.298727 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.298733 | orchestrator | 2025-05-14 02:33:35.298739 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-14 02:33:35.298745 | orchestrator | Wednesday 14 May 2025 02:31:56 +0000 (0:00:01.172) 0:05:46.043 ********* 2025-05-14 02:33:35.298751 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.298758 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.298764 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.298770 | orchestrator | 2025-05-14 02:33:35.298775 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-14 02:33:35.298782 | orchestrator | Wednesday 14 May 2025 02:31:57 +0000 
(0:00:01.450) 0:05:47.494 ********* 2025-05-14 02:33:35.298788 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.298793 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.298799 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.298805 | orchestrator | 2025-05-14 02:33:35.298811 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-14 02:33:35.298818 | orchestrator | Wednesday 14 May 2025 02:32:00 +0000 (0:00:02.473) 0:05:49.967 ********* 2025-05-14 02:33:35.298823 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.298829 | orchestrator | 2025-05-14 02:33:35.298835 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-14 02:33:35.298841 | orchestrator | Wednesday 14 May 2025 02:32:01 +0000 (0:00:01.490) 0:05:51.458 ********* 2025-05-14 02:33:35.298848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:33:35.298856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:33:35.298873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-05-14 02:33:35.298891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:33:35.298898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:33:35.298906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:33:35.298913 | orchestrator | 2025-05-14 02:33:35.298919 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-14 02:33:35.298926 
| orchestrator | Wednesday 14 May 2025 02:32:07 +0000 (0:00:06.322) 0:05:57.781 ********* 2025-05-14 02:33:35.298944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:33:35.298957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:33:35.298964 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.298971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:33:35.298978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:33:35.298984 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.298999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:33:35.299014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:33:35.299020 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.299027 | orchestrator | 2025-05-14 02:33:35.299033 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-14 02:33:35.299039 | orchestrator | Wednesday 14 May 2025 02:32:08 +0000 (0:00:00.955) 0:05:58.736 ********* 2025-05-14 02:33:35.299046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-14 02:33:35.299052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:33:35.299058 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:33:35.299065 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.299072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-14 02:33:35.299078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:33:35.299084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:33:35.299090 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.299096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-14 02:33:35.299104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:33:35.299114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:33:35.299120 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.299126 | orchestrator | 2025-05-14 02:33:35.299132 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-14 02:33:35.299138 | orchestrator | Wednesday 14 May 2025 02:32:10 +0000 (0:00:01.343) 0:06:00.079 ********* 2025-05-14 02:33:35.299144 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.299150 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.299156 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.299162 | orchestrator | 2025-05-14 02:33:35.299168 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-14 02:33:35.299184 | orchestrator | Wednesday 14 May 2025 02:32:10 +0000 (0:00:00.711) 0:06:00.791 ********* 2025-05-14 02:33:35.299191 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.299197 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.299203 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.299209 | orchestrator | 2025-05-14 02:33:35.299215 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-14 02:33:35.299225 | orchestrator | Wednesday 14 May 2025 02:32:12 +0000 (0:00:01.724) 0:06:02.515 ********* 2025-05-14 02:33:35.299231 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.299237 | orchestrator | 2025-05-14 02:33:35.299243 | 
orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-14 02:33:35.299250 | orchestrator | Wednesday 14 May 2025 02:32:14 +0000 (0:00:01.945) 0:06:04.461 ********* 2025-05-14 02:33:35.299256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:33:35.299263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:33:35.299270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:33:35.299314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:33:35.299320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:33:35.299355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:33:35.299371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:33:35.299400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:33:35.299412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:33:35.299456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:33:35.299467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:33:35.299480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:33:35.299493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299550 | orchestrator | 2025-05-14 02:33:35.299556 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-14 02:33:35.299562 | orchestrator | Wednesday 14 May 2025 02:32:19 +0000 (0:00:05.038) 0:06:09.499 ********* 2025-05-14 02:33:35.299568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:33:35.299578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:33:35.299585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:33:35.299591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:33:35.299617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:33:35.299705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:33:35.299727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:33:35.299759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:33:35.299779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299785 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.299792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-05-14 02:33:35.299798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299821 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.299828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:33:35.299836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:33:35.299852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:33:35.299884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:33:35.299890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:33:35.299920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:33:35.299927 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.299933 | orchestrator | 2025-05-14 02:33:35.299939 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-14 02:33:35.299945 | orchestrator | Wednesday 14 May 2025 02:32:21 +0000 (0:00:01.484) 0:06:10.984 ********* 2025-05-14 02:33:35.299951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-14 02:33:35.299959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-14 02:33:35.299965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:33:35.299973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
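The loop items echoed above follow the kolla-ansible service-definition shape: each service dict may carry a 'haproxy' mapping whose per-frontend entries ('enabled', 'mode', 'external', 'port', ...) drive what the haproxy-config role renders. Below is a minimal Python sketch, using data abridged from the items above and not the role's actual Jinja/Ansible logic, of how those mappings reduce to the set of frontends that would actually be configured:

```python
# Minimal sketch (an assumption for illustration, not kolla-ansible's real implementation)
# of how the per-service 'haproxy' mappings shown in the log map to frontend listeners.
# The service data below is abridged from the loop items echoed above.
services = {
    "prometheus-alertmanager": {
        "enabled": True,
        "haproxy": {
            "prometheus_alertmanager": {"enabled": True, "mode": "http", "external": False, "port": "9093"},
            "prometheus_alertmanager_external": {"enabled": True, "mode": "http", "external": True, "port": "9093"},
        },
    },
    "prometheus-openstack-exporter": {
        "enabled": False,  # a disabled service contributes no frontends
        "haproxy": {
            "prometheus_openstack_exporter": {"enabled": False, "mode": "http", "external": False, "port": "9198"},
        },
    },
}


def enabled_frontends(services: dict) -> list[tuple[str, str, bool]]:
    """Return (frontend_name, port, external) for every frontend that would be configured."""
    result = []
    for svc in services.values():
        if not svc.get("enabled"):
            continue
        for name, fe in svc.get("haproxy", {}).items():
            # kolla service definitions use both booleans and 'yes' strings for enabled flags
            if fe.get("enabled") in (True, "yes"):
                result.append((name, fe["port"], fe["external"]))
    return result


print(enabled_frontends(services))
# [('prometheus_alertmanager', '9093', False), ('prometheus_alertmanager_external', '9093', True)]
```

The same shape recurs for the rabbitmq and skyline items later in this play, where the per-frontend 'enabled' flags appear as the string 'yes' rather than a boolean, which is why the sketch accepts both forms.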
2025-05-14 02:33:35.299979 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.299985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-14 02:33:35.299992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-14 02:33:35.299998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:33:35.300004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:33:35.300010 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-14 02:33:35.300025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-14 02:33:35.300035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:33:35.300046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:33:35.300052 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300058 | orchestrator | 2025-05-14 02:33:35.300065 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-14 02:33:35.300071 | orchestrator | Wednesday 14 May 2025 02:32:22 +0000 (0:00:01.343) 0:06:12.327 ********* 2025-05-14 02:33:35.300077 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300083 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300090 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300096 | orchestrator | 2025-05-14 02:33:35.300102 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-14 02:33:35.300108 | orchestrator | Wednesday 14 May 2025 02:32:23 +0000 (0:00:00.964) 0:06:13.291 ********* 2025-05-14 02:33:35.300114 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300120 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300126 | orchestrator | skipping: 
[testbed-node-2] 2025-05-14 02:33:35.300132 | orchestrator | 2025-05-14 02:33:35.300138 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-14 02:33:35.300144 | orchestrator | Wednesday 14 May 2025 02:32:25 +0000 (0:00:01.742) 0:06:15.033 ********* 2025-05-14 02:33:35.300150 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.300156 | orchestrator | 2025-05-14 02:33:35.300162 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-14 02:33:35.300168 | orchestrator | Wednesday 14 May 2025 02:32:26 +0000 (0:00:01.571) 0:06:16.605 ********* 2025-05-14 02:33:35.300174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:33:35.300181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:33:35.300203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:33:35.300210 | orchestrator | 2025-05-14 02:33:35.300217 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-14 02:33:35.300223 | orchestrator | Wednesday 14 May 2025 02:32:29 +0000 (0:00:03.091) 0:06:19.696 ********* 2025-05-14 02:33:35.300230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-14 02:33:35.300236 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-14 02:33:35.300248 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-14 02:33:35.300266 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300272 | orchestrator | 2025-05-14 02:33:35.300278 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-14 02:33:35.300284 | orchestrator | Wednesday 14 May 2025 02:32:30 +0000 (0:00:00.702) 0:06:20.399 ********* 2025-05-14 02:33:35.300294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-14 02:33:35.300300 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-14 02:33:35.300317 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-14 02:33:35.300329 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300335 | orchestrator | 2025-05-14 02:33:35.300341 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-14 02:33:35.300347 | orchestrator | Wednesday 14 May 2025 02:32:31 +0000 (0:00:00.873) 0:06:21.272 ********* 2025-05-14 02:33:35.300354 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300360 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300366 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300373 | orchestrator | 2025-05-14 02:33:35.300379 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-14 02:33:35.300385 | orchestrator | Wednesday 14 May 2025 02:32:32 +0000 (0:00:00.790) 0:06:22.063 ********* 2025-05-14 02:33:35.300391 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300397 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300403 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300410 | orchestrator | 2025-05-14 02:33:35.300416 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-14 02:33:35.300422 | orchestrator | Wednesday 14 May 2025 02:32:33 +0000 (0:00:01.888) 0:06:23.951 ********* 2025-05-14 02:33:35.300428 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:33:35.300434 | orchestrator | 2025-05-14 02:33:35.300440 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-14 02:33:35.300447 | orchestrator | Wednesday 14 May 2025 02:32:36 +0000 (0:00:02.029) 0:06:25.980 ********* 2025-05-14 02:33:35.300454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.300461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.300478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.300489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.300497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.300503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-14 02:33:35.300515 | orchestrator | 2025-05-14 02:33:35.300521 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-14 02:33:35.300528 | orchestrator | Wednesday 14 May 2025 02:32:45 +0000 (0:00:09.034) 0:06:35.014 ********* 2025-05-14 02:33:35.300537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.300547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.300554 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.300567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.300577 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-14 
02:33:35.300600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-14 02:33:35.300607 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300613 | orchestrator | 2025-05-14 02:33:35.300619 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-14 02:33:35.300625 | orchestrator | Wednesday 14 May 2025 02:32:46 +0000 (0:00:00.959) 0:06:35.974 ********* 2025-05-14 02:33:35.300631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300680 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300711 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300717 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:33:35.300743 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300750 | orchestrator | 2025-05-14 02:33:35.300756 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-14 02:33:35.300762 | orchestrator | Wednesday 14 May 2025 02:32:47 +0000 (0:00:01.955) 0:06:37.930 ********* 2025-05-14 02:33:35.300768 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.300774 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.300780 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.300786 | orchestrator | 2025-05-14 02:33:35.300796 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-14 02:33:35.300802 | orchestrator | Wednesday 14 May 2025 02:32:49 +0000 (0:00:01.481) 0:06:39.411 ********* 2025-05-14 02:33:35.300808 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.300814 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.300820 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.300826 | orchestrator | 2025-05-14 02:33:35.300832 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-14 02:33:35.300842 | orchestrator | Wednesday 14 May 2025 02:32:52 +0000 (0:00:02.593) 0:06:42.004 ********* 2025-05-14 02:33:35.300848 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300855 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300861 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300868 | orchestrator | 2025-05-14 02:33:35.300874 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-14 02:33:35.300880 | orchestrator | Wednesday 14 May 2025 02:32:52 +0000 (0:00:00.318) 0:06:42.322 ********* 2025-05-14 02:33:35.300887 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300893 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300899 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300905 | orchestrator | 2025-05-14 02:33:35.300917 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-14 02:33:35.300923 | orchestrator | Wednesday 14 May 2025 02:32:52 +0000 (0:00:00.578) 0:06:42.901 ********* 2025-05-14 02:33:35.300929 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300935 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300941 | orchestrator | skipping: 
[testbed-node-2] 2025-05-14 02:33:35.300947 | orchestrator | 2025-05-14 02:33:35.300953 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-14 02:33:35.300960 | orchestrator | Wednesday 14 May 2025 02:32:53 +0000 (0:00:00.648) 0:06:43.549 ********* 2025-05-14 02:33:35.300966 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.300972 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.300978 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.300984 | orchestrator | 2025-05-14 02:33:35.300990 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-14 02:33:35.300996 | orchestrator | Wednesday 14 May 2025 02:32:53 +0000 (0:00:00.304) 0:06:43.854 ********* 2025-05-14 02:33:35.301002 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.301008 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.301014 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.301020 | orchestrator | 2025-05-14 02:33:35.301026 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-14 02:33:35.301032 | orchestrator | Wednesday 14 May 2025 02:32:54 +0000 (0:00:00.588) 0:06:44.442 ********* 2025-05-14 02:33:35.301038 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.301043 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.301049 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.301055 | orchestrator | 2025-05-14 02:33:35.301062 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-14 02:33:35.301067 | orchestrator | Wednesday 14 May 2025 02:32:55 +0000 (0:00:00.986) 0:06:45.428 ********* 2025-05-14 02:33:35.301074 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.301080 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.301086 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.301093 | orchestrator | 2025-05-14 02:33:35.301099 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-14 02:33:35.301105 | orchestrator | Wednesday 14 May 2025 02:32:56 +0000 (0:00:00.607) 0:06:46.036 ********* 2025-05-14 02:33:35.301110 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.301116 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.301122 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.301128 | orchestrator | 2025-05-14 02:33:35.301134 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-14 02:33:35.301140 | orchestrator | Wednesday 14 May 2025 02:32:56 +0000 (0:00:00.490) 0:06:46.526 ********* 2025-05-14 02:33:35.301146 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.301152 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.301158 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.301164 | orchestrator | 2025-05-14 02:33:35.301170 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-14 02:33:35.301176 | orchestrator | Wednesday 14 May 2025 02:32:57 +0000 (0:00:01.119) 0:06:47.646 ********* 2025-05-14 02:33:35.301182 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.301187 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.301193 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.301200 | orchestrator | 2025-05-14 02:33:35.301205 | orchestrator | RUNNING HANDLER 
[loadbalancer : Stop backup proxysql container] **************** 2025-05-14 02:33:35.301211 | orchestrator | Wednesday 14 May 2025 02:32:58 +0000 (0:00:01.085) 0:06:48.731 ********* 2025-05-14 02:33:35.301217 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.301223 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.301229 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.301235 | orchestrator | 2025-05-14 02:33:35.301241 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-14 02:33:35.301248 | orchestrator | Wednesday 14 May 2025 02:32:59 +0000 (0:00:00.967) 0:06:49.698 ********* 2025-05-14 02:33:35.301263 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.301270 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.301275 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.301281 | orchestrator | 2025-05-14 02:33:35.301287 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-14 02:33:35.301294 | orchestrator | Wednesday 14 May 2025 02:33:04 +0000 (0:00:05.058) 0:06:54.757 ********* 2025-05-14 02:33:35.301300 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.301306 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.301312 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.301318 | orchestrator | 2025-05-14 02:33:35.301324 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-14 02:33:35.301330 | orchestrator | Wednesday 14 May 2025 02:33:07 +0000 (0:00:03.059) 0:06:57.816 ********* 2025-05-14 02:33:35.301336 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.301343 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.301350 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.301357 | orchestrator | 2025-05-14 02:33:35.301363 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-14 02:33:35.301375 | orchestrator | Wednesday 14 May 2025 02:33:14 +0000 (0:00:06.718) 0:07:04.535 ********* 2025-05-14 02:33:35.301381 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.301387 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.301393 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.301399 | orchestrator | 2025-05-14 02:33:35.301405 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-14 02:33:35.301415 | orchestrator | Wednesday 14 May 2025 02:33:18 +0000 (0:00:03.797) 0:07:08.333 ********* 2025-05-14 02:33:35.301422 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:33:35.301428 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:33:35.301434 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:33:35.301440 | orchestrator | 2025-05-14 02:33:35.301445 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-14 02:33:35.301451 | orchestrator | Wednesday 14 May 2025 02:33:23 +0000 (0:00:04.751) 0:07:13.084 ********* 2025-05-14 02:33:35.301457 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.301463 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.301469 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.301475 | orchestrator | 2025-05-14 02:33:35.301480 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-14 02:33:35.301487 | orchestrator | Wednesday 14 
May 2025 02:33:23 +0000 (0:00:00.648) 0:07:13.733 ********* 2025-05-14 02:33:35.301493 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.301499 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.301505 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.301510 | orchestrator | 2025-05-14 02:33:35.301516 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-14 02:33:35.301522 | orchestrator | Wednesday 14 May 2025 02:33:24 +0000 (0:00:00.385) 0:07:14.119 ********* 2025-05-14 02:33:35.301527 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.301533 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.301539 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.301545 | orchestrator | 2025-05-14 02:33:35.301550 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-14 02:33:35.301557 | orchestrator | Wednesday 14 May 2025 02:33:24 +0000 (0:00:00.624) 0:07:14.743 ********* 2025-05-14 02:33:35.301563 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.301569 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.301575 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.301580 | orchestrator | 2025-05-14 02:33:35.301586 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-14 02:33:35.301592 | orchestrator | Wednesday 14 May 2025 02:33:25 +0000 (0:00:00.667) 0:07:15.411 ********* 2025-05-14 02:33:35.301603 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.301609 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.301615 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.301621 | orchestrator | 2025-05-14 02:33:35.301627 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-14 02:33:35.301633 | orchestrator | Wednesday 14 May 2025 02:33:26 +0000 (0:00:00.661) 0:07:16.073 ********* 2025-05-14 02:33:35.301693 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:33:35.301701 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:33:35.301706 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:33:35.301713 | orchestrator | 2025-05-14 02:33:35.301718 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-14 02:33:35.301725 | orchestrator | Wednesday 14 May 2025 02:33:26 +0000 (0:00:00.333) 0:07:16.407 ********* 2025-05-14 02:33:35.301731 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.301737 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.301743 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.301749 | orchestrator | 2025-05-14 02:33:35.301755 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-14 02:33:35.301761 | orchestrator | Wednesday 14 May 2025 02:33:31 +0000 (0:00:05.032) 0:07:21.439 ********* 2025-05-14 02:33:35.301767 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:33:35.301773 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:33:35.301779 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:33:35.301785 | orchestrator | 2025-05-14 02:33:35.301791 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:33:35.301798 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-14 
02:33:35.301805 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-14 02:33:35.301811 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-14 02:33:35.301817 | orchestrator | 2025-05-14 02:33:35.301823 | orchestrator | 2025-05-14 02:33:35.301829 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:33:35.301836 | orchestrator | Wednesday 14 May 2025 02:33:32 +0000 (0:00:01.175) 0:07:22.615 ********* 2025-05-14 02:33:35.301842 | orchestrator | =============================================================================== 2025-05-14 02:33:35.301848 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 9.03s 2025-05-14 02:33:35.301853 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.30s 2025-05-14 02:33:35.301859 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 6.72s 2025-05-14 02:33:35.301865 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.32s 2025-05-14 02:33:35.301871 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.06s 2025-05-14 02:33:35.301877 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.33s 2025-05-14 02:33:35.301883 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.29s 2025-05-14 02:33:35.301889 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.21s 2025-05-14 02:33:35.301900 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.12s 2025-05-14 02:33:35.301905 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.10s 2025-05-14 02:33:35.301910 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.06s 2025-05-14 02:33:35.301916 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 5.04s 2025-05-14 02:33:35.301925 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.04s 2025-05-14 02:33:35.301932 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.03s 2025-05-14 02:33:35.301943 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.86s 2025-05-14 02:33:35.301950 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.84s 2025-05-14 02:33:35.301956 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.77s 2025-05-14 02:33:35.301962 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.75s 2025-05-14 02:33:35.301968 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 4.63s 2025-05-14 02:33:35.301974 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.49s 2025-05-14 02:33:38.330428 | orchestrator | 2025-05-14 02:33:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:33:38.332377 | orchestrator | 2025-05-14 02:33:38 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:33:38.336433 | orchestrator | 2025-05-14 02:33:38 | INFO  | 
2025-05-14 02:33:38.330428 | orchestrator | 2025-05-14 02:33:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:33:38.332377 | orchestrator | 2025-05-14 02:33:38 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED
2025-05-14 02:33:38.336433 | orchestrator | 2025-05-14 02:33:38 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED
2025-05-14 02:33:38.338515 | orchestrator | 2025-05-14 02:33:38 | INFO  | Task 079e0b06-b63f-44de-91ec-45424f5c2aff is in state STARTED
2025-05-14 02:33:38.338571 | orchestrator | 2025-05-14 02:33:38 | INFO  | Wait 1 second(s) until the next check
[~47 further polling cycles (02:33:41 through 02:36:01, one check roughly every 3 seconds) omitted: tasks d96aeed1-a30d-4e84-85b3-93c7cfc3e055, 9a9341a3-fba1-4485-b11c-2f04f19927b1, 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 and 079e0b06-b63f-44de-91ec-45424f5c2aff were still reported in state STARTED at every check]
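The repeated "Task ... is in state STARTED" and "Wait 1 second(s) until the next check" lines come from a client-side status poll: the orchestrator re-reads the state of the four task IDs and sleeps between checks until they leave the running state. A minimal sketch of that pattern follows; the `fetch_task_state` callback and the timeout handling are assumptions for illustration, not the actual osism client code.

```python
import time

# Hedged sketch of the wait loop visible above: re-read the state of a set of
# task IDs and sleep briefly between checks until none of them is running.
# fetch_task_state is a caller-supplied callback and a hypothetical stand-in;
# the real job uses the osism tooling, whose internals are not shown here.
def wait_for_tasks(task_ids, fetch_task_state, interval=1, timeout=3600):
    """Poll until every task has left the PENDING/STARTED states or the timeout expires."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):
            state = fetch_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return not pending  # True if every task finished before the timeout
```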
2025-05-14 02:36:01.870836 | orchestrator |
2025-05-14 02:36:01.870912 | orchestrator |
2025-05-14 02:36:01.870934 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-14 02:36:01.870947 | orchestrator |
2025-05-14 02:36:01.870959 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-14 02:36:01.870970 | orchestrator | Wednesday 14 May 2025 02:33:36 +0000 (0:00:00.261) 0:00:00.261 *********
2025-05-14 02:36:01.870982 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:36:01.870994 | orchestrator | ok: [testbed-node-1]
2025-05-14 02:36:01.871005 | orchestrator | ok: [testbed-node-2]
2025-05-14 02:36:01.871016 | orchestrator |
2025-05-14 02:36:01.871027 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-14 02:36:01.871038 | orchestrator | Wednesday 14 May 2025 02:33:36 +0000 (0:00:00.315) 0:00:00.577 *********
2025-05-14 02:36:01.871050 |
orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-14 02:36:01.871081 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-14 02:36:01.871092 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-14 02:36:01.871103 | orchestrator | 2025-05-14 02:36:01.871114 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-14 02:36:01.871125 | orchestrator | 2025-05-14 02:36:01.871136 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 02:36:01.871147 | orchestrator | Wednesday 14 May 2025 02:33:37 +0000 (0:00:00.279) 0:00:00.856 ********* 2025-05-14 02:36:01.871158 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:36:01.871170 | orchestrator | 2025-05-14 02:36:01.871181 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-14 02:36:01.871216 | orchestrator | Wednesday 14 May 2025 02:33:37 +0000 (0:00:00.626) 0:00:01.483 ********* 2025-05-14 02:36:01.871228 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:36:01.871239 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:36:01.871250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:36:01.871261 | orchestrator | 2025-05-14 02:36:01.871272 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-14 02:36:01.871283 | orchestrator | Wednesday 14 May 2025 02:33:38 +0000 (0:00:00.791) 0:00:02.274 ********* 2025-05-14 02:36:01.871298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.871313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-05-14 02:36:01.871340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.871361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.871385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.871398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.871412 | orchestrator | 2025-05-14 02:36:01.871425 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 02:36:01.871439 | orchestrator | Wednesday 14 May 2025 02:33:40 +0000 (0:00:01.592) 0:00:03.867 ********* 2025-05-14 02:36:01.871452 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:36:01.871465 | orchestrator | 2025-05-14 02:36:01.871478 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-14 02:36:01.871492 | orchestrator | Wednesday 14 May 2025 02:33:40 +0000 (0:00:00.830) 0:00:04.697 ********* 2025-05-14 02:36:01.871515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.871542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.871555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.871567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.871588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.871631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.871653 | orchestrator | 2025-05-14 02:36:01.871664 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-14 02:36:01.871676 | orchestrator | Wednesday 14 May 2025 02:33:44 +0000 (0:00:03.441) 0:00:08.139 ********* 2025-05-14 02:36:01.871688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:36:01.871700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:36:01.871712 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:36:01.871732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}})  2025-05-14 02:36:01.871757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:36:01.871769 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:36:01.871781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:36:01.871794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:36:01.871806 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:36:01.871817 | orchestrator | 2025-05-14 02:36:01.871829 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-14 02:36:01.871840 | orchestrator | Wednesday 14 May 2025 02:33:46 +0000 (0:00:01.655) 0:00:09.794 
********* 2025-05-14 02:36:01.871858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:36:01.871883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:36:01.871896 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:36:01.871907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:36:01.871920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:36:01.871931 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:36:01.871948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:36:01.871973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:36:01.871985 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:36:01.871996 | orchestrator | 2025-05-14 02:36:01.872007 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-14 02:36:01.872019 | orchestrator | Wednesday 14 May 2025 02:33:47 +0000 (0:00:01.286) 0:00:11.080 ********* 2025-05-14 02:36:01.872030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.872043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.872054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.872092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.872105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.872118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.872129 | orchestrator | 2025-05-14 02:36:01.872141 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-14 02:36:01.872152 | orchestrator | Wednesday 14 May 2025 02:33:50 +0000 (0:00:02.889) 0:00:13.969 ********* 2025-05-14 02:36:01.872164 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:36:01.872175 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:36:01.872186 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:36:01.872197 | orchestrator | 2025-05-14 02:36:01.872207 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-14 02:36:01.872226 | orchestrator | Wednesday 14 May 2025 02:33:54 +0000 (0:00:04.564) 0:00:18.534 ********* 2025-05-14 02:36:01.872237 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:36:01.872248 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:36:01.872259 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:36:01.872270 | orchestrator | 2025-05-14 02:36:01.872281 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-14 02:36:01.872292 | orchestrator | Wednesday 14 May 2025 02:33:56 +0000 (0:00:02.159) 0:00:20.694 ********* 2025-05-14 02:36:01.872435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.872462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.872483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:36:01.872504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.872564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.872592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:36:01.872641 | orchestrator | 2025-05-14 02:36:01.872659 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 02:36:01.872675 | orchestrator | Wednesday 14 May 2025 02:33:59 +0000 (0:00:02.931) 0:00:23.625 ********* 2025-05-14 02:36:01.872694 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:36:01.872711 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:36:01.872729 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:36:01.872748 | orchestrator | 2025-05-14 02:36:01.872765 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-14 02:36:01.872784 | orchestrator | Wednesday 14 May 2025 02:34:00 +0000 (0:00:00.459) 0:00:24.085 ********* 2025-05-14 02:36:01.872799 | orchestrator | 2025-05-14 02:36:01.872810 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-14 02:36:01.872821 | orchestrator | Wednesday 14 May 2025 02:34:00 +0000 (0:00:00.404) 0:00:24.489 ********* 2025-05-14 02:36:01.872832 | orchestrator | 2025-05-14 02:36:01.872843 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-14 02:36:01.872854 | orchestrator | Wednesday 14 May 2025 02:34:00 +0000 (0:00:00.060) 0:00:24.550 ********* 2025-05-14 02:36:01.872865 | orchestrator | 2025-05-14 02:36:01.872875 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-14 02:36:01.872887 | 
orchestrator | Wednesday 14 May 2025 02:34:00 +0000 (0:00:00.092) 0:00:24.642 ********* 2025-05-14 02:36:01.872898 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:36:01.872909 | orchestrator | 2025-05-14 02:36:01.872920 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-14 02:36:01.872940 | orchestrator | Wednesday 14 May 2025 02:34:01 +0000 (0:00:00.293) 0:00:24.936 ********* 2025-05-14 02:36:01.872951 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:36:01.872963 | orchestrator | 2025-05-14 02:36:01.872974 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-14 02:36:01.872985 | orchestrator | Wednesday 14 May 2025 02:34:01 +0000 (0:00:00.758) 0:00:25.694 ********* 2025-05-14 02:36:01.872996 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:36:01.873007 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:36:01.873018 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:36:01.873029 | orchestrator | 2025-05-14 02:36:01.873039 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-14 02:36:01.873051 | orchestrator | Wednesday 14 May 2025 02:34:45 +0000 (0:00:43.397) 0:01:09.091 ********* 2025-05-14 02:36:01.873064 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:36:01.873077 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:36:01.873090 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:36:01.873103 | orchestrator | 2025-05-14 02:36:01.873116 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 02:36:01.873129 | orchestrator | Wednesday 14 May 2025 02:35:47 +0000 (0:01:02.498) 0:02:11.590 ********* 2025-05-14 02:36:01.873142 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:36:01.873155 | orchestrator | 2025-05-14 02:36:01.873168 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-14 02:36:01.873180 | orchestrator | Wednesday 14 May 2025 02:35:48 +0000 (0:00:00.762) 0:02:12.353 ********* 2025-05-14 02:36:01.873193 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:36:01.873206 | orchestrator | 2025-05-14 02:36:01.873218 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-14 02:36:01.873231 | orchestrator | Wednesday 14 May 2025 02:35:51 +0000 (0:00:02.663) 0:02:15.016 ********* 2025-05-14 02:36:01.873243 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:36:01.873256 | orchestrator | 2025-05-14 02:36:01.873268 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-14 02:36:01.873281 | orchestrator | Wednesday 14 May 2025 02:35:53 +0000 (0:00:02.513) 0:02:17.529 ********* 2025-05-14 02:36:01.873294 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:36:01.873308 | orchestrator | 2025-05-14 02:36:01.873320 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-14 02:36:01.873333 | orchestrator | Wednesday 14 May 2025 02:35:56 +0000 (0:00:03.060) 0:02:20.590 ********* 2025-05-14 02:36:01.873346 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:36:01.873359 | orchestrator | 2025-05-14 02:36:01.873379 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 
02:36:01.873393 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:36:01.873409 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:36:01.873422 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:36:01.873434 | orchestrator | 2025-05-14 02:36:01.873445 | orchestrator | 2025-05-14 02:36:01.873456 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:36:01.873474 | orchestrator | Wednesday 14 May 2025 02:35:59 +0000 (0:00:03.164) 0:02:23.755 ********* 2025-05-14 02:36:01.873485 | orchestrator | =============================================================================== 2025-05-14 02:36:01.873496 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 62.50s 2025-05-14 02:36:01.873507 | orchestrator | opensearch : Restart opensearch container ------------------------------ 43.40s 2025-05-14 02:36:01.873518 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.56s 2025-05-14 02:36:01.873536 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.44s 2025-05-14 02:36:01.873547 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.16s 2025-05-14 02:36:01.873563 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.06s 2025-05-14 02:36:01.873574 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.93s 2025-05-14 02:36:01.873585 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.89s 2025-05-14 02:36:01.873596 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.66s 2025-05-14 02:36:01.873626 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.51s 2025-05-14 02:36:01.873638 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.16s 2025-05-14 02:36:01.873649 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.66s 2025-05-14 02:36:01.873660 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.59s 2025-05-14 02:36:01.873671 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.29s 2025-05-14 02:36:01.873682 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.83s 2025-05-14 02:36:01.873693 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.79s 2025-05-14 02:36:01.873703 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.76s 2025-05-14 02:36:01.873714 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.76s 2025-05-14 02:36:01.873725 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.63s 2025-05-14 02:36:01.873736 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.56s 2025-05-14 02:36:01.873774 | orchestrator | 2025-05-14 02:36:01 | INFO  | Task 079e0b06-b63f-44de-91ec-45424f5c2aff is in state SUCCESS 2025-05-14 02:36:01.873786 | orchestrator | 2025-05-14 02:36:01 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 02:36:04.927319 | orchestrator | 2025-05-14 02:36:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:04.929253 | orchestrator | 2025-05-14 02:36:04 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:04.931718 | orchestrator | 2025-05-14 02:36:04 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:04.931772 | orchestrator | 2025-05-14 02:36:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:07.991317 | orchestrator | 2025-05-14 02:36:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:07.995573 | orchestrator | 2025-05-14 02:36:07 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:07.997383 | orchestrator | 2025-05-14 02:36:07 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:07.998079 | orchestrator | 2025-05-14 02:36:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:11.050353 | orchestrator | 2025-05-14 02:36:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:11.052911 | orchestrator | 2025-05-14 02:36:11 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:11.054294 | orchestrator | 2025-05-14 02:36:11 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:11.054324 | orchestrator | 2025-05-14 02:36:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:14.104391 | orchestrator | 2025-05-14 02:36:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:14.105758 | orchestrator | 2025-05-14 02:36:14 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:14.106972 | orchestrator | 2025-05-14 02:36:14 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:14.107001 | orchestrator | 2025-05-14 02:36:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:17.161515 | orchestrator | 2025-05-14 02:36:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:17.163120 | orchestrator | 2025-05-14 02:36:17 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:17.165682 | orchestrator | 2025-05-14 02:36:17 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:17.165746 | orchestrator | 2025-05-14 02:36:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:20.225847 | orchestrator | 2025-05-14 02:36:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:20.226249 | orchestrator | 2025-05-14 02:36:20 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:20.226755 | orchestrator | 2025-05-14 02:36:20 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:20.226789 | orchestrator | 2025-05-14 02:36:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:23.273448 | orchestrator | 2025-05-14 02:36:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:23.275659 | orchestrator | 2025-05-14 02:36:23 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:23.280112 | orchestrator | 2025-05-14 02:36:23 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state 
STARTED 2025-05-14 02:36:23.280382 | orchestrator | 2025-05-14 02:36:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:26.325847 | orchestrator | 2025-05-14 02:36:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:26.327959 | orchestrator | 2025-05-14 02:36:26 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:26.329508 | orchestrator | 2025-05-14 02:36:26 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:26.329553 | orchestrator | 2025-05-14 02:36:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:29.373287 | orchestrator | 2025-05-14 02:36:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:29.374327 | orchestrator | 2025-05-14 02:36:29 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:29.376160 | orchestrator | 2025-05-14 02:36:29 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:29.376201 | orchestrator | 2025-05-14 02:36:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:32.425132 | orchestrator | 2025-05-14 02:36:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:32.427422 | orchestrator | 2025-05-14 02:36:32 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:32.429674 | orchestrator | 2025-05-14 02:36:32 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:32.429754 | orchestrator | 2025-05-14 02:36:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:35.481302 | orchestrator | 2025-05-14 02:36:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:35.482924 | orchestrator | 2025-05-14 02:36:35 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:35.484811 | orchestrator | 2025-05-14 02:36:35 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:35.484871 | orchestrator | 2025-05-14 02:36:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:38.536966 | orchestrator | 2025-05-14 02:36:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:38.539963 | orchestrator | 2025-05-14 02:36:38 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:38.542365 | orchestrator | 2025-05-14 02:36:38 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:38.542472 | orchestrator | 2025-05-14 02:36:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:41.595496 | orchestrator | 2025-05-14 02:36:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:41.597344 | orchestrator | 2025-05-14 02:36:41 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:41.598682 | orchestrator | 2025-05-14 02:36:41 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:41.599020 | orchestrator | 2025-05-14 02:36:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:44.651985 | orchestrator | 2025-05-14 02:36:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:44.659483 | orchestrator | 2025-05-14 02:36:44 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:44.663829 | orchestrator 
| 2025-05-14 02:36:44 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:44.663899 | orchestrator | 2025-05-14 02:36:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:47.724697 | orchestrator | 2025-05-14 02:36:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:47.726579 | orchestrator | 2025-05-14 02:36:47 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:47.728501 | orchestrator | 2025-05-14 02:36:47 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:47.728574 | orchestrator | 2025-05-14 02:36:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:50.781784 | orchestrator | 2025-05-14 02:36:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:50.783997 | orchestrator | 2025-05-14 02:36:50 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:50.786168 | orchestrator | 2025-05-14 02:36:50 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:50.786210 | orchestrator | 2025-05-14 02:36:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:53.843100 | orchestrator | 2025-05-14 02:36:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:53.849169 | orchestrator | 2025-05-14 02:36:53 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:53.849259 | orchestrator | 2025-05-14 02:36:53 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:53.849276 | orchestrator | 2025-05-14 02:36:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:56.894590 | orchestrator | 2025-05-14 02:36:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:56.897859 | orchestrator | 2025-05-14 02:36:56 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:56.900339 | orchestrator | 2025-05-14 02:36:56 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:56.900488 | orchestrator | 2025-05-14 02:36:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:59.944008 | orchestrator | 2025-05-14 02:36:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:36:59.945401 | orchestrator | 2025-05-14 02:36:59 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state STARTED 2025-05-14 02:36:59.948031 | orchestrator | 2025-05-14 02:36:59 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:36:59.948251 | orchestrator | 2025-05-14 02:36:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:03.006123 | orchestrator | 2025-05-14 02:37:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:37:03.007233 | orchestrator | 2025-05-14 02:37:03 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:37:03.013472 | orchestrator | 2025-05-14 02:37:03 | INFO  | Task 9a9341a3-fba1-4485-b11c-2f04f19927b1 is in state SUCCESS 2025-05-14 02:37:03.015979 | orchestrator | 2025-05-14 02:37:03.016074 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:37:03.016102 | orchestrator | 2025-05-14 02:37:03.016121 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 
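Before the Ceph play output continues below, the opensearch loop items dumped above are easier to read when unpacked: each one is a kolla-style service definition. A minimal YAML sketch of the internal OpenSearch entry, reconstructed only from values visible in this log (the enclosing variable name opensearch_services is an assumption for illustration, not confirmed by this output):

    opensearch_services:            # assumed name; the per-service keys mirror the logged items
      opensearch:
        container_name: opensearch
        group: opensearch
        enabled: true
        image: registry.osism.tech/kolla/release/opensearch:2.18.0.20241206
        environment:
          OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true"
        volumes:
          - "/etc/kolla/opensearch/:/var/lib/kolla/config_files/"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "opensearch:/var/lib/opensearch/data"
          - "kolla_logs:/var/log/kolla/"
        dimensions: {}
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]   # node-0; node-1/2 use .11/.12
          timeout: "30"
        haproxy:
          opensearch:
            enabled: true
            mode: http
            external: false
            port: "9200"
            frontend_http_extra:
              - "option dontlog-normal"

The opensearch-dashboards item follows the same shape but listens on port 5601, adds basic-auth credentials to its HAProxy frontends, and defines an additional external frontend for api.testbed.osism.xyz.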
2025-05-14 02:37:03.016142 | orchestrator | 2025-05-14 02:37:03.016161 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-14 02:37:03.016180 | orchestrator | Wednesday 14 May 2025 02:23:44 +0000 (0:00:01.980) 0:00:01.980 ********* 2025-05-14 02:37:03.016198 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.016211 | orchestrator | 2025-05-14 02:37:03.016222 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-14 02:37:03.016234 | orchestrator | Wednesday 14 May 2025 02:23:46 +0000 (0:00:01.321) 0:00:03.302 ********* 2025-05-14 02:37:03.016245 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:37:03.016257 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 02:37:03.016268 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 02:37:03.016279 | orchestrator | 2025-05-14 02:37:03.016289 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-14 02:37:03.016300 | orchestrator | Wednesday 14 May 2025 02:23:46 +0000 (0:00:00.657) 0:00:03.959 ********* 2025-05-14 02:37:03.016312 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.016323 | orchestrator | 2025-05-14 02:37:03.016352 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-14 02:37:03.016363 | orchestrator | Wednesday 14 May 2025 02:23:47 +0000 (0:00:00.980) 0:00:04.940 ********* 2025-05-14 02:37:03.016375 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.016386 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.016397 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.016407 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.016418 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.016429 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.016440 | orchestrator | 2025-05-14 02:37:03.016451 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-14 02:37:03.016462 | orchestrator | Wednesday 14 May 2025 02:23:49 +0000 (0:00:01.461) 0:00:06.402 ********* 2025-05-14 02:37:03.016472 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.016483 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.016494 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.016505 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.016515 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.016551 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.016565 | orchestrator | 2025-05-14 02:37:03.016578 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-14 02:37:03.016618 | orchestrator | Wednesday 14 May 2025 02:23:50 +0000 (0:00:00.914) 0:00:07.317 ********* 2025-05-14 02:37:03.016644 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.016659 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.016672 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.016685 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.016698 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.016711 | 
orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.016724 | orchestrator | 2025-05-14 02:37:03.016737 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-14 02:37:03.016751 | orchestrator | Wednesday 14 May 2025 02:23:51 +0000 (0:00:01.307) 0:00:08.624 ********* 2025-05-14 02:37:03.016764 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.016776 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.016789 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.016802 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.016814 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.016827 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.016840 | orchestrator | 2025-05-14 02:37:03.016853 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-14 02:37:03.016867 | orchestrator | Wednesday 14 May 2025 02:23:52 +0000 (0:00:01.192) 0:00:09.817 ********* 2025-05-14 02:37:03.016880 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.016891 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.016902 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.016912 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.016923 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.016934 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.016944 | orchestrator | 2025-05-14 02:37:03.016955 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-14 02:37:03.016966 | orchestrator | Wednesday 14 May 2025 02:23:53 +0000 (0:00:01.158) 0:00:10.976 ********* 2025-05-14 02:37:03.016976 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.016987 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.016998 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.017008 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.017019 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.017029 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.017040 | orchestrator | 2025-05-14 02:37:03.017050 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-14 02:37:03.017061 | orchestrator | Wednesday 14 May 2025 02:23:55 +0000 (0:00:01.026) 0:00:12.003 ********* 2025-05-14 02:37:03.017072 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.017084 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.017095 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.017105 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.017116 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.017127 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.017137 | orchestrator | 2025-05-14 02:37:03.017148 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-14 02:37:03.017159 | orchestrator | Wednesday 14 May 2025 02:23:56 +0000 (0:00:01.042) 0:00:13.045 ********* 2025-05-14 02:37:03.017170 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.017181 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.017192 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.017202 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.017213 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.017224 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.017234 | orchestrator | 2025-05-14 02:37:03.017265 | orchestrator | TASK 
[ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-14 02:37:03.017277 | orchestrator | Wednesday 14 May 2025 02:23:57 +0000 (0:00:01.139) 0:00:14.185 ********* 2025-05-14 02:37:03.017297 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:37:03.017308 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:37:03.017319 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:37:03.017330 | orchestrator | 2025-05-14 02:37:03.017341 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-14 02:37:03.017352 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:01.257) 0:00:15.443 ********* 2025-05-14 02:37:03.017363 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.017374 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.017385 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.017395 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.017406 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.017417 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.017427 | orchestrator | 2025-05-14 02:37:03.017438 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-14 02:37:03.017449 | orchestrator | Wednesday 14 May 2025 02:24:00 +0000 (0:00:01.924) 0:00:17.367 ********* 2025-05-14 02:37:03.017460 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:37:03.017471 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:37:03.017482 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:37:03.017492 | orchestrator | 2025-05-14 02:37:03.017503 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-14 02:37:03.017520 | orchestrator | Wednesday 14 May 2025 02:24:03 +0000 (0:00:03.008) 0:00:20.375 ********* 2025-05-14 02:37:03.017531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:37:03.017542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:37:03.017552 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:37:03.017563 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.017574 | orchestrator | 2025-05-14 02:37:03.017585 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-14 02:37:03.017621 | orchestrator | Wednesday 14 May 2025 02:24:03 +0000 (0:00:00.432) 0:00:20.808 ********* 2025-05-14 02:37:03.017635 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 02:37:03.017649 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 02:37:03.017660 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 
'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 02:37:03.017672 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.017683 | orchestrator | 2025-05-14 02:37:03.017694 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-14 02:37:03.017705 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.692) 0:00:21.500 ********* 2025-05-14 02:37:03.017718 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:37:03.017731 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:37:03.017751 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:37:03.017762 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.017774 | orchestrator | 2025-05-14 02:37:03.017785 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-14 02:37:03.017803 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.174) 0:00:21.674 ********* 2025-05-14 02:37:03.017817 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-14 02:24:01.087081', 'end': '2025-05-14 02:24:01.342291', 'delta': '0:00:00.255210', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 02:37:03.017837 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-14 02:24:01.912379', 'end': '2025-05-14 02:24:02.195552', 'delta': '0:00:00.283173', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': 
False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 02:37:03.017849 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-14 02:24:02.901711', 'end': '2025-05-14 02:24:03.155719', 'delta': '0:00:00.254008', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 02:37:03.017861 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.017872 | orchestrator | 2025-05-14 02:37:03.017883 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-14 02:37:03.017894 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.197) 0:00:21.872 ********* 2025-05-14 02:37:03.017905 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.017916 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.017927 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.017938 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.017949 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.017960 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.017970 | orchestrator | 2025-05-14 02:37:03.017981 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-14 02:37:03.017999 | orchestrator | Wednesday 14 May 2025 02:24:06 +0000 (0:00:01.291) 0:00:23.163 ********* 2025-05-14 02:37:03.018011 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.018074 | orchestrator | 2025-05-14 02:37:03.018088 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-14 02:37:03.018107 | orchestrator | Wednesday 14 May 2025 02:24:06 +0000 (0:00:00.710) 0:00:23.874 ********* 2025-05-14 02:37:03.018127 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.018151 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.018177 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.018195 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.018213 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.018230 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.018246 | orchestrator | 2025-05-14 02:37:03.018263 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-14 02:37:03.018279 | orchestrator | Wednesday 14 May 2025 02:24:07 +0000 (0:00:00.861) 0:00:24.735 ********* 2025-05-14 02:37:03.018298 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.018314 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.018332 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.018351 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.018369 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.018388 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.018407 | orchestrator | 2025-05-14 02:37:03.018427 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:37:03.018446 | orchestrator | Wednesday 14 
May 2025 02:24:09 +0000 (0:00:02.225) 0:00:26.961 ********* 2025-05-14 02:37:03.018460 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.018471 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.018482 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.018493 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.018503 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.018514 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.018525 | orchestrator | 2025-05-14 02:37:03.018536 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-14 02:37:03.018547 | orchestrator | Wednesday 14 May 2025 02:24:10 +0000 (0:00:00.967) 0:00:27.928 ********* 2025-05-14 02:37:03.018568 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.018580 | orchestrator | 2025-05-14 02:37:03.018591 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-14 02:37:03.018653 | orchestrator | Wednesday 14 May 2025 02:24:11 +0000 (0:00:00.187) 0:00:28.116 ********* 2025-05-14 02:37:03.018664 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.018675 | orchestrator | 2025-05-14 02:37:03.018686 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:37:03.018697 | orchestrator | Wednesday 14 May 2025 02:24:12 +0000 (0:00:01.400) 0:00:29.517 ********* 2025-05-14 02:37:03.018708 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.018719 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.018729 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.018740 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.018751 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.018762 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.018772 | orchestrator | 2025-05-14 02:37:03.018783 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-14 02:37:03.018794 | orchestrator | Wednesday 14 May 2025 02:24:13 +0000 (0:00:00.809) 0:00:30.327 ********* 2025-05-14 02:37:03.018805 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.018815 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.018826 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.018836 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.018847 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.018857 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.018868 | orchestrator | 2025-05-14 02:37:03.018890 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-14 02:37:03.018901 | orchestrator | Wednesday 14 May 2025 02:24:14 +0000 (0:00:01.181) 0:00:31.508 ********* 2025-05-14 02:37:03.018912 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.018923 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.018934 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.018952 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.018963 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.018974 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.018985 | orchestrator | 2025-05-14 02:37:03.018997 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-14 02:37:03.019008 | orchestrator | 
Wednesday 14 May 2025 02:24:15 +0000 (0:00:00.880) 0:00:32.389 ********* 2025-05-14 02:37:03.019019 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.019030 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.019041 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.019052 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.019063 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.019074 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.019085 | orchestrator | 2025-05-14 02:37:03.019096 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-14 02:37:03.019107 | orchestrator | Wednesday 14 May 2025 02:24:16 +0000 (0:00:00.934) 0:00:33.324 ********* 2025-05-14 02:37:03.019118 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.019129 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.019140 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.019151 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.019161 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.019172 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.019183 | orchestrator | 2025-05-14 02:37:03.019194 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-14 02:37:03.019205 | orchestrator | Wednesday 14 May 2025 02:24:16 +0000 (0:00:00.659) 0:00:33.983 ********* 2025-05-14 02:37:03.019216 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.019227 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.019238 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.019249 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.019260 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.019271 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.019282 | orchestrator | 2025-05-14 02:37:03.019293 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-14 02:37:03.019304 | orchestrator | Wednesday 14 May 2025 02:24:18 +0000 (0:00:01.024) 0:00:35.007 ********* 2025-05-14 02:37:03.019315 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.019326 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.019337 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.019348 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.019359 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.019369 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.019380 | orchestrator | 2025-05-14 02:37:03.019391 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-14 02:37:03.019402 | orchestrator | Wednesday 14 May 2025 02:24:18 +0000 (0:00:00.717) 0:00:35.724 ********* 2025-05-14 02:37:03.019415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part1', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part14', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part15', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part16', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.019567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.019585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019672 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019771 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.019795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30', 'scsi-SQEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a52c913-9b08-49a5-b109-def0ab7dcd30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.019880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.019897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019952 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.019990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020010 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.020036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134', 'scsi-SQEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134-part1', 'scsi-SQEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134-part14', 'scsi-SQEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134-part15', 'scsi-SQEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134-part16', 'scsi-SQEMU_QEMU_HARDDISK_5815f41e-a950-4348-941c-f26c72002134-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--caf94b5f--07a0--5316--9d7c--8f668ab64c5d-osd--block--caf94b5f--07a0--5316--9d7c--8f668ab64c5d', 'dm-uuid-LVM-ZTOMnjaLSd9SUt3iz7042ZI7zHa7ehDAKJlCxan9qclgcEPHFYha1Tc6FZ3eWICR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020143 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.020154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0a91196--50f5--599a--8231--3d981ca1eca9-osd--block--a0a91196--50f5--599a--8231--3d981ca1eca9', 
'dm-uuid-LVM-DdSgoItp3kLzXGfqSWc7KV1e81S9ldTEsDDmSkFMuQLBzYYJUOIHqcN4rbQfCsu0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea3c2360--3d2e--5360--8839--85b817b77bc3-osd--block--ea3c2360--3d2e--5360--8839--85b817b77bc3', 'dm-uuid-LVM-ZvfW4xHeBxWJ0JwFq55oHg9Eas3fybM2dZ1b2IRfrayLX44ir6xE7p01kFSQwchZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fecac30f--087c--5b0b--83ef--f9d2b642a995-osd--block--fecac30f--087c--5b0b--83ef--f9d2b642a995', 'dm-uuid-LVM-F1SoaSTraaxmWaDqVV9hFSecVNiGPTB9OoM2w1nei0W0EK61FawRIFDaITD4sMEw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--caf94b5f--07a0--5316--9d7c--8f668ab64c5d-osd--block--caf94b5f--07a0--5316--9d7c--8f668ab64c5d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c6J53J-1ruA-Aj0l-N1eD-BZNl-R7xJ-EDOTDH', 'scsi-0QEMU_QEMU_HARDDISK_6c9e420d-0c60-4ebc-ac19-f905b2b7a82f', 'scsi-SQEMU_QEMU_HARDDISK_6c9e420d-0c60-4ebc-ac19-f905b2b7a82f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a0a91196--50f5--599a--8231--3d981ca1eca9-osd--block--a0a91196--50f5--599a--8231--3d981ca1eca9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BubS93-4kSO-DDMx-NQ79-XP0m-dbZL-Q4o4N0', 'scsi-0QEMU_QEMU_HARDDISK_7c39c8ea-7878-4e89-b4ec-61bbe868aea7', 'scsi-SQEMU_QEMU_HARDDISK_7c39c8ea-7878-4e89-b4ec-61bbe868aea7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e31a2ff7-84d9-48c9-b0e1-1526f23b46b1', 'scsi-SQEMU_QEMU_HARDDISK_e31a2ff7-84d9-48c9-b0e1-1526f23b46b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part1', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part14', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part15', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part16', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ea3c2360--3d2e--5360--8839--85b817b77bc3-osd--block--ea3c2360--3d2e--5360--8839--85b817b77bc3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xuEc73-D3ov-BEsr-vtcG-WkfW-YFFh-RkQRMM', 'scsi-0QEMU_QEMU_HARDDISK_2fe9822d-742a-4109-b2fd-4f62bd011e9b', 'scsi-SQEMU_QEMU_HARDDISK_2fe9822d-742a-4109-b2fd-4f62bd011e9b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fecac30f--087c--5b0b--83ef--f9d2b642a995-osd--block--fecac30f--087c--5b0b--83ef--f9d2b642a995'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6rroif-CDia-e2Uf-a8Gz-xMld-tzkG-Fpt7Ie', 'scsi-0QEMU_QEMU_HARDDISK_4bf8951c-ead1-422f-8e98-563fd238f873', 'scsi-SQEMU_QEMU_HARDDISK_4bf8951c-ead1-422f-8e98-563fd238f873'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9158ba9c-f661-457a-83a0-7301d2e715e9', 'scsi-SQEMU_QEMU_HARDDISK_9158ba9c-f661-457a-83a0-7301d2e715e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020588 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.020633 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.020662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03d77871--dede--5752--b4dd--afb6f86d8bca-osd--block--03d77871--dede--5752--b4dd--afb6f86d8bca', 'dm-uuid-LVM-IDAJ819ekzEGVYidaDTaD9Y5ZOiWCmRfi1FSgSb4gPJkINyqialcVodaMedaJccO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c7e27ae--f126--51b5--99e7--7e9908cad598-osd--block--0c7e27ae--f126--51b5--99e7--7e9908cad598', 'dm-uuid-LVM-XBp7l9yF39H6kNCz4oRlhe3vRMb8Tg516CaEYxFsVRTfIpPKJFIUvwBRmiKncpNN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:37:03.020822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part1', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part14', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part15', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part16', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--03d77871--dede--5752--b4dd--afb6f86d8bca-osd--block--03d77871--dede--5752--b4dd--afb6f86d8bca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yDHJI6-oSxo-4ect-JY34-UAmO-wAly-O0hYht', 'scsi-0QEMU_QEMU_HARDDISK_7d716f79-cf1d-4cd5-9251-d30dd616fe8c', 'scsi-SQEMU_QEMU_HARDDISK_7d716f79-cf1d-4cd5-9251-d30dd616fe8c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0c7e27ae--f126--51b5--99e7--7e9908cad598-osd--block--0c7e27ae--f126--51b5--99e7--7e9908cad598'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JF6WJR-Mtz9-qQlp-5BLF-QwSQ-lxFw-fvvc0G', 'scsi-0QEMU_QEMU_HARDDISK_276d5307-5ea7-4279-8794-03223ea8507b', 'scsi-SQEMU_QEMU_HARDDISK_276d5307-5ea7-4279-8794-03223ea8507b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07a08b1a-3bd9-437e-a737-9a0e3fc440bf', 'scsi-SQEMU_QEMU_HARDDISK_07a08b1a-3bd9-437e-a737-9a0e3fc440bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:37:03.020899 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.020910 | orchestrator | 2025-05-14 02:37:03.020922 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-14 02:37:03.020934 | orchestrator | Wednesday 14 May 2025 02:24:20 +0000 (0:00:01.896) 0:00:37.621 ********* 2025-05-14 02:37:03.020945 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.020956 | orchestrator | 2025-05-14 02:37:03.020967 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-14 02:37:03.020978 | orchestrator | Wednesday 14 May 2025 02:24:20 +0000 (0:00:00.354) 0:00:37.975 ********* 2025-05-14 02:37:03.020988 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.021006 | orchestrator | 2025-05-14 02:37:03.021017 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-14 02:37:03.021032 | orchestrator | Wednesday 14 May 2025 02:24:21 +0000 (0:00:00.196) 0:00:38.172 ********* 2025-05-14 02:37:03.021043 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.021054 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.021065 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.021076 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.021088 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.021099 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.021110 | orchestrator | 2025-05-14 02:37:03.021120 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-14 02:37:03.021131 | orchestrator | Wednesday 14 May 2025 02:24:22 +0000 
(0:00:00.898) 0:00:39.071 ********* 2025-05-14 02:37:03.021142 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.021153 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.021164 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.021175 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.021185 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.021196 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.021207 | orchestrator | 2025-05-14 02:37:03.021218 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-14 02:37:03.021228 | orchestrator | Wednesday 14 May 2025 02:24:23 +0000 (0:00:01.638) 0:00:40.709 ********* 2025-05-14 02:37:03.021239 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.021250 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.021265 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.021282 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.021300 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.021318 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.021336 | orchestrator | 2025-05-14 02:37:03.021353 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:37:03.021364 | orchestrator | Wednesday 14 May 2025 02:24:24 +0000 (0:00:00.875) 0:00:41.585 ********* 2025-05-14 02:37:03.021375 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.021386 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.021397 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.021408 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.021419 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.021430 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.021441 | orchestrator | 2025-05-14 02:37:03.021452 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:37:03.021463 | orchestrator | Wednesday 14 May 2025 02:24:25 +0000 (0:00:01.260) 0:00:42.845 ********* 2025-05-14 02:37:03.021474 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.021485 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.021496 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.021507 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.021518 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.021528 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.021539 | orchestrator | 2025-05-14 02:37:03.021550 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:37:03.021561 | orchestrator | Wednesday 14 May 2025 02:24:27 +0000 (0:00:01.306) 0:00:44.152 ********* 2025-05-14 02:37:03.021572 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.021583 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.021660 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.021674 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.021686 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.021697 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.021708 | orchestrator | 2025-05-14 02:37:03.021719 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:37:03.021729 | orchestrator | Wednesday 14 May 2025 02:24:28 +0000 (0:00:01.205) 0:00:45.357 ********* 2025-05-14 02:37:03.021747 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.021757 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.021767 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.021777 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.021787 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.021796 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.021807 | orchestrator | 2025-05-14 02:37:03.021824 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-14 02:37:03.021848 | orchestrator | Wednesday 14 May 2025 02:24:29 +0000 (0:00:00.990) 0:00:46.348 ********* 2025-05-14 02:37:03.021868 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:37:03.021908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:37:03.021925 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:37:03.021935 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 02:37:03.021945 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.021955 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 02:37:03.021964 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:37:03.021974 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:37:03.021983 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:37:03.021993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:37:03.022003 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:37:03.022012 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.022056 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.022067 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:37:03.022076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:37:03.022086 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:37:03.022096 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:37:03.022105 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.022115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:37:03.022125 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.022135 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:37:03.022144 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:37:03.022154 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:37:03.022164 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.022174 | orchestrator | 2025-05-14 02:37:03.022191 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-14 02:37:03.022201 | orchestrator | Wednesday 14 May 2025 02:24:32 +0000 (0:00:03.638) 0:00:49.986 ********* 2025-05-14 02:37:03.022211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:37:03.022221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:37:03.022230 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 02:37:03.022240 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 
02:37:03.022250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:37:03.022259 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:37:03.022269 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:37:03.022278 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.022288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:37:03.022298 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:37:03.022307 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:37:03.022316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:37:03.022334 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.022344 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.022354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:37:03.022364 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.022373 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:37:03.022383 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:37:03.022392 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:37:03.022402 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:37:03.022411 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:37:03.022421 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.022430 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:37:03.022440 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.022449 | orchestrator | 2025-05-14 02:37:03.022459 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-14 02:37:03.022469 | orchestrator | Wednesday 14 May 2025 02:24:35 +0000 (0:00:02.690) 0:00:52.677 ********* 2025-05-14 02:37:03.022479 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:37:03.022488 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-14 02:37:03.022498 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 02:37:03.022508 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-14 02:37:03.022518 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 02:37:03.022527 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-14 02:37:03.022537 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-14 02:37:03.022547 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-14 02:37:03.022556 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-14 02:37:03.022566 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-14 02:37:03.022576 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-14 02:37:03.022585 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-14 02:37:03.022620 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-14 02:37:03.022632 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-14 02:37:03.022642 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-14 02:37:03.022651 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-14 02:37:03.022661 | orchestrator | ok: 
[testbed-node-4] => (item=testbed-node-2) 2025-05-14 02:37:03.022671 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-14 02:37:03.022681 | orchestrator | 2025-05-14 02:37:03.022690 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-14 02:37:03.022714 | orchestrator | Wednesday 14 May 2025 02:24:40 +0000 (0:00:04.501) 0:00:57.179 ********* 2025-05-14 02:37:03.022724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:37:03.022734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:37:03.022744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:37:03.022754 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 02:37:03.022763 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:37:03.022773 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:37:03.022783 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.022792 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 02:37:03.022802 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:37:03.022817 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:37:03.022833 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.022850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:37:03.022878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:37:03.022895 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.022912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:37:03.022927 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:37:03.022942 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:37:03.022952 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.022962 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:37:03.022971 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.022987 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:37:03.022997 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:37:03.023007 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:37:03.023016 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.023026 | orchestrator | 2025-05-14 02:37:03.023036 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-14 02:37:03.023046 | orchestrator | Wednesday 14 May 2025 02:24:41 +0000 (0:00:01.438) 0:00:58.617 ********* 2025-05-14 02:37:03.023055 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:37:03.023065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:37:03.023075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:37:03.023084 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 02:37:03.023094 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:37:03.023103 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:37:03.023113 | orchestrator | skipping: [testbed-node-0] 2025-05-14 
02:37:03.023123 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 02:37:03.023132 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:37:03.023142 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:37:03.023152 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.023161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:37:03.023170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:37:03.023180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:37:03.023190 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.023199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:37:03.023208 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:37:03.023218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:37:03.023227 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.023237 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.023247 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:37:03.023257 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:37:03.023267 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:37:03.023276 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.023286 | orchestrator | 2025-05-14 02:37:03.023296 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-14 02:37:03.023305 | orchestrator | Wednesday 14 May 2025 02:24:42 +0000 (0:00:01.298) 0:00:59.916 ********* 2025-05-14 02:37:03.023315 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-14 02:37:03.023326 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:37:03.023336 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:37:03.023346 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:37:03.023362 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-14 02:37:03.023372 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:37:03.023381 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:37:03.023391 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:37:03.023401 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-14 02:37:03.023419 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:37:03.023430 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:37:03.023440 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:37:03.023450 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:37:03.023460 | 
orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:37:03.023469 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:37:03.023479 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.023489 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.023498 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:37:03.023508 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:37:03.023518 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:37:03.023527 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.023537 | orchestrator | 2025-05-14 02:37:03.023547 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-14 02:37:03.023556 | orchestrator | Wednesday 14 May 2025 02:24:44 +0000 (0:00:01.259) 0:01:01.175 ********* 2025-05-14 02:37:03.023566 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.023576 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.023586 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.023618 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.023629 | orchestrator | 2025-05-14 02:37:03.023640 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:37:03.023651 | orchestrator | Wednesday 14 May 2025 02:24:45 +0000 (0:00:01.406) 0:01:02.581 ********* 2025-05-14 02:37:03.023661 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.023671 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.023681 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.023690 | orchestrator | 2025-05-14 02:37:03.023700 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:37:03.023711 | orchestrator | Wednesday 14 May 2025 02:24:46 +0000 (0:00:00.533) 0:01:03.115 ********* 2025-05-14 02:37:03.023720 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.023730 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.023740 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.023749 | orchestrator | 2025-05-14 02:37:03.023759 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:37:03.023769 | orchestrator | Wednesday 14 May 2025 02:24:46 +0000 (0:00:00.640) 0:01:03.755 ********* 2025-05-14 02:37:03.023778 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.023788 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.023797 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.023807 | orchestrator | 2025-05-14 02:37:03.023816 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:37:03.023832 | orchestrator | Wednesday 14 May 2025 02:24:47 +0000 (0:00:00.584) 0:01:04.339 ********* 2025-05-14 02:37:03.023842 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.023852 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.023862 | orchestrator | ok: [testbed-node-5] 
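For orientation in the fact-gathering above: the items printed by the _monitor_addresses and _current_monitor_address tasks come from the per-host monitor_address variables (192.168.16.10-12 in this run); every mon host collects the full list, and _current_monitor_address keeps only the entry whose name matches the host itself, which is why the two non-matching items are skipped on each node. The following Python snippet is an illustration of that selection logic as inferred from the log, not ceph-ansible source; names and values mirror the output above.

    # Illustration only -- not ceph-ansible code. Addresses taken from the log above.
    monitor_address = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }

    # "set_fact _monitor_addresses to monitor_address": one {'name', 'addr'}
    # entry per monitor, identical on every host in the play.
    _monitor_addresses = [{"name": name, "addr": addr}
                          for name, addr in monitor_address.items()]

    def current_monitor_address(inventory_hostname: str) -> str:
        # "set_fact _current_monitor_address": keep the item whose name matches
        # the current host; the other items appear as "skipping" in the log.
        return next(entry["addr"] for entry in _monitor_addresses
                    if entry["name"] == inventory_hostname)

    print(current_monitor_address("testbed-node-1"))  # 192.168.16.11
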
2025-05-14 02:37:03.023872 | orchestrator | 2025-05-14 02:37:03.023882 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:37:03.023892 | orchestrator | Wednesday 14 May 2025 02:24:48 +0000 (0:00:01.302) 0:01:05.642 ********* 2025-05-14 02:37:03.023901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.023911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.023921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.023930 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.023940 | orchestrator | 2025-05-14 02:37:03.023950 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:37:03.023960 | orchestrator | Wednesday 14 May 2025 02:24:49 +0000 (0:00:01.205) 0:01:06.847 ********* 2025-05-14 02:37:03.023970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.023979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.023989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.023999 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.024008 | orchestrator | 2025-05-14 02:37:03.024018 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:37:03.024028 | orchestrator | Wednesday 14 May 2025 02:24:50 +0000 (0:00:00.755) 0:01:07.603 ********* 2025-05-14 02:37:03.024038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.024047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.024057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.024066 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.024076 | orchestrator | 2025-05-14 02:37:03.024086 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.024095 | orchestrator | Wednesday 14 May 2025 02:24:52 +0000 (0:00:01.648) 0:01:09.252 ********* 2025-05-14 02:37:03.024105 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.024115 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.024124 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.024134 | orchestrator | 2025-05-14 02:37:03.024144 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:37:03.024188 | orchestrator | Wednesday 14 May 2025 02:24:52 +0000 (0:00:00.611) 0:01:09.864 ********* 2025-05-14 02:37:03.024199 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 02:37:03.024209 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 02:37:03.024219 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-14 02:37:03.024228 | orchestrator | 2025-05-14 02:37:03.024238 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:37:03.024247 | orchestrator | Wednesday 14 May 2025 02:24:54 +0000 (0:00:01.527) 0:01:11.391 ********* 2025-05-14 02:37:03.024257 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.024266 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.024275 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.024285 | orchestrator | 2025-05-14 02:37:03.024295 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.024305 | orchestrator | Wednesday 14 May 2025 02:24:55 +0000 (0:00:00.902) 0:01:12.293 ********* 2025-05-14 02:37:03.024315 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.024324 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.024334 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.024344 | orchestrator | 2025-05-14 02:37:03.024353 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:37:03.024369 | orchestrator | Wednesday 14 May 2025 02:24:56 +0000 (0:00:00.882) 0:01:13.175 ********* 2025-05-14 02:37:03.024379 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.024389 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.024399 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.024408 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.024418 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.024428 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.024437 | orchestrator | 2025-05-14 02:37:03.024447 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:37:03.024461 | orchestrator | Wednesday 14 May 2025 02:24:57 +0000 (0:00:01.112) 0:01:14.288 ********* 2025-05-14 02:37:03.024471 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.024481 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.024491 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.024500 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.024510 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.024520 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.024529 | orchestrator | 2025-05-14 02:37:03.024539 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:37:03.024548 | orchestrator | Wednesday 14 May 2025 02:24:58 +0000 (0:00:00.796) 0:01:15.084 ********* 2025-05-14 02:37:03.024558 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.024567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.024577 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:37:03.024586 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:37:03.024619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.024636 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.024646 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:37:03.024655 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:37:03.024665 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.024674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:37:03.024683 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:37:03.024693 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:37:03.024702 | orchestrator | 2025-05-14 02:37:03.024712 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-14 02:37:03.024722 | orchestrator | Wednesday 14 May 2025 02:24:59 +0000 (0:00:00.955) 0:01:16.040 ********* 2025-05-14 02:37:03.024732 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.024742 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.024752 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.024762 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.024771 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.024781 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.024791 | orchestrator | 2025-05-14 02:37:03.024800 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-14 02:37:03.024810 | orchestrator | Wednesday 14 May 2025 02:24:59 +0000 (0:00:00.901) 0:01:16.942 ********* 2025-05-14 02:37:03.024820 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:37:03.024830 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:37:03.024840 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:37:03.024850 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 02:37:03.024870 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:37:03.024880 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:37:03.024890 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:37:03.024899 | orchestrator | 2025-05-14 02:37:03.024909 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-14 02:37:03.024918 | orchestrator | Wednesday 14 May 2025 02:25:00 +0000 (0:00:00.741) 0:01:17.683 ********* 2025-05-14 02:37:03.024928 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:37:03.024944 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:37:03.024954 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:37:03.024964 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 02:37:03.024973 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:37:03.024983 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:37:03.024993 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:37:03.025003 | orchestrator | 2025-05-14 02:37:03.025012 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:37:03.025021 | orchestrator | Wednesday 14 May 2025 02:25:02 +0000 (0:00:02.127) 0:01:19.811 ********* 2025-05-14 02:37:03.025032 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.025043 | orchestrator | 2025-05-14 02:37:03.025052 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-05-14 02:37:03.025062 | orchestrator | Wednesday 14 May 2025 02:25:04 +0000 (0:00:01.273) 0:01:21.084 ********* 2025-05-14 02:37:03.025072 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.025081 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.025091 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.025100 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.025110 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.025120 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.025130 | orchestrator | 2025-05-14 02:37:03.025145 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:37:03.025155 | orchestrator | Wednesday 14 May 2025 02:25:05 +0000 (0:00:00.987) 0:01:22.072 ********* 2025-05-14 02:37:03.025164 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.025174 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.025184 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.025194 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.025203 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.025213 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.025223 | orchestrator | 2025-05-14 02:37:03.025232 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:37:03.025242 | orchestrator | Wednesday 14 May 2025 02:25:06 +0000 (0:00:01.250) 0:01:23.323 ********* 2025-05-14 02:37:03.025252 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.025261 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.025271 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.025280 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.025290 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.025300 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.025309 | orchestrator | 2025-05-14 02:37:03.025319 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:37:03.025329 | orchestrator | Wednesday 14 May 2025 02:25:07 +0000 (0:00:01.161) 0:01:24.485 ********* 2025-05-14 02:37:03.025344 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.025354 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.025363 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.025373 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.025384 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.025393 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.025403 | orchestrator | 2025-05-14 02:37:03.025413 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:37:03.025422 | orchestrator | Wednesday 14 May 2025 02:25:08 +0000 (0:00:00.976) 0:01:25.461 ********* 2025-05-14 02:37:03.025432 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.025442 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.025451 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.025461 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.025471 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.025480 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.025490 | orchestrator | 2025-05-14 02:37:03.025499 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 
02:37:03.025509 | orchestrator | Wednesday 14 May 2025 02:25:09 +0000 (0:00:00.948) 0:01:26.409 ********* 2025-05-14 02:37:03.025518 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.025528 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.025537 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.025547 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.025556 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.025566 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.025576 | orchestrator | 2025-05-14 02:37:03.025585 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:37:03.025646 | orchestrator | Wednesday 14 May 2025 02:25:09 +0000 (0:00:00.522) 0:01:26.932 ********* 2025-05-14 02:37:03.025658 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.025667 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.025677 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.025687 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.025696 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.025706 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.025716 | orchestrator | 2025-05-14 02:37:03.025726 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:37:03.025735 | orchestrator | Wednesday 14 May 2025 02:25:10 +0000 (0:00:00.882) 0:01:27.814 ********* 2025-05-14 02:37:03.025745 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.025755 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.025765 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.025775 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.025784 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.025794 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.025803 | orchestrator | 2025-05-14 02:37:03.025813 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:37:03.025823 | orchestrator | Wednesday 14 May 2025 02:25:11 +0000 (0:00:00.630) 0:01:28.445 ********* 2025-05-14 02:37:03.025840 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.025851 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.025861 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.025870 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.025880 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.025889 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.025899 | orchestrator | 2025-05-14 02:37:03.025909 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:37:03.025919 | orchestrator | Wednesday 14 May 2025 02:25:12 +0000 (0:00:00.792) 0:01:29.238 ********* 2025-05-14 02:37:03.025928 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.025938 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.025947 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.025957 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.025973 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.025983 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.025992 | orchestrator | 2025-05-14 02:37:03.026002 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
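The ok/skipping pattern in the container checks above follows group membership rather than anything failing: testbed-node-0..2 host the mon and mgr containers, testbed-node-3..5 the osd, mds and rgw containers, and roles not deployed in this testbed (rbd-mirror, nfs, iscsi gateway) are skipped on every node, while the ceph-crash check that follows runs everywhere. A minimal sketch of that dispatch; the group layout is inferred from the log, not read from the real inventory.

    # Group layout inferred from the ok/skipping pattern above (assumption, not
    # the actual inventory). Each "check for a ... container" task only runs on
    # hosts that belong to the matching group.
    GROUPS = {
        "mon": {"testbed-node-0", "testbed-node-1", "testbed-node-2"},
        "mgr": {"testbed-node-0", "testbed-node-1", "testbed-node-2"},
        "osd": {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
        "mds": {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
        "rgw": {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
        "rbd-mirror": set(),   # not deployed here -> skipped on all nodes
        "nfs": set(),
    }

    def check_result(daemon: str, host: str) -> str:
        """Status a host logs for one container check: 'ok' if it belongs to
        the daemon's group (ceph-crash runs on every node), else 'skipping'."""
        if daemon == "ceph-crash":
            return "ok"
        return "ok" if host in GROUPS.get(daemon, set()) else "skipping"

    assert check_result("osd", "testbed-node-4") == "ok"
    assert check_result("mon", "testbed-node-4") == "skipping"
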
2025-05-14 02:37:03.026012 | orchestrator | Wednesday 14 May 2025 02:25:12 +0000 (0:00:00.725) 0:01:29.964 ********* 2025-05-14 02:37:03.026235 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.026253 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.026266 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.026280 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.026293 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.026307 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.026319 | orchestrator | 2025-05-14 02:37:03.026334 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:37:03.026342 | orchestrator | Wednesday 14 May 2025 02:25:14 +0000 (0:00:01.249) 0:01:31.213 ********* 2025-05-14 02:37:03.026351 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.026359 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.026367 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.026375 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.026391 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.026399 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.026407 | orchestrator | 2025-05-14 02:37:03.026415 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:37:03.026423 | orchestrator | Wednesday 14 May 2025 02:25:14 +0000 (0:00:00.549) 0:01:31.763 ********* 2025-05-14 02:37:03.026431 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.026439 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.026447 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.026455 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.026463 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.026471 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.026479 | orchestrator | 2025-05-14 02:37:03.026487 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:37:03.026495 | orchestrator | Wednesday 14 May 2025 02:25:15 +0000 (0:00:00.892) 0:01:32.655 ********* 2025-05-14 02:37:03.026503 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.026511 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.026519 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.026528 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.026536 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.026544 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.026551 | orchestrator | 2025-05-14 02:37:03.026560 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:37:03.026568 | orchestrator | Wednesday 14 May 2025 02:25:16 +0000 (0:00:00.662) 0:01:33.317 ********* 2025-05-14 02:37:03.026575 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.026583 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.026700 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.026712 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.026720 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.026728 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.026736 | orchestrator | 2025-05-14 02:37:03.026744 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:37:03.026752 | orchestrator | Wednesday 14 May 2025 02:25:17 +0000 
(0:00:00.805) 0:01:34.122 ********* 2025-05-14 02:37:03.026760 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.026768 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.026776 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.026784 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.026792 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.026800 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.026808 | orchestrator | 2025-05-14 02:37:03.026816 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:37:03.026836 | orchestrator | Wednesday 14 May 2025 02:25:17 +0000 (0:00:00.610) 0:01:34.732 ********* 2025-05-14 02:37:03.026844 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.026852 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.026860 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.026868 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.026875 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.026883 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.026891 | orchestrator | 2025-05-14 02:37:03.026899 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:37:03.026907 | orchestrator | Wednesday 14 May 2025 02:25:18 +0000 (0:00:00.936) 0:01:35.669 ********* 2025-05-14 02:37:03.026915 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.026923 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.026931 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.026938 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.026946 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.026954 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.026962 | orchestrator | 2025-05-14 02:37:03.026970 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:37:03.026978 | orchestrator | Wednesday 14 May 2025 02:25:19 +0000 (0:00:00.687) 0:01:36.356 ********* 2025-05-14 02:37:03.026987 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.026994 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.027002 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.027010 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027018 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027026 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027034 | orchestrator | 2025-05-14 02:37:03.027042 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:37:03.027143 | orchestrator | Wednesday 14 May 2025 02:25:20 +0000 (0:00:01.086) 0:01:37.442 ********* 2025-05-14 02:37:03.027157 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.027165 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.027173 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.027181 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.027189 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.027197 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.027205 | orchestrator | 2025-05-14 02:37:03.027213 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:37:03.027221 | orchestrator | Wednesday 14 May 2025 02:25:21 +0000 (0:00:00.933) 0:01:38.376 ********* 2025-05-14 02:37:03.027230 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027238 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027246 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.027254 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027263 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027270 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027279 | orchestrator | 2025-05-14 02:37:03.027287 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:37:03.027295 | orchestrator | Wednesday 14 May 2025 02:25:22 +0000 (0:00:01.158) 0:01:39.535 ********* 2025-05-14 02:37:03.027303 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027311 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027319 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.027327 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027335 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027343 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027351 | orchestrator | 2025-05-14 02:37:03.027359 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:37:03.027367 | orchestrator | Wednesday 14 May 2025 02:25:23 +0000 (0:00:00.713) 0:01:40.248 ********* 2025-05-14 02:37:03.027375 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027383 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027405 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.027413 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027421 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027429 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027437 | orchestrator | 2025-05-14 02:37:03.027445 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:37:03.027453 | orchestrator | Wednesday 14 May 2025 02:25:24 +0000 (0:00:01.009) 0:01:41.257 ********* 2025-05-14 02:37:03.027461 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027469 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027477 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.027485 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027493 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027501 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027509 | orchestrator | 2025-05-14 02:37:03.027517 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:37:03.027525 | orchestrator | Wednesday 14 May 2025 02:25:24 +0000 (0:00:00.689) 0:01:41.947 ********* 2025-05-14 02:37:03.027533 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027541 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027549 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.027557 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027565 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027573 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027581 | orchestrator | 2025-05-14 02:37:03.027589 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:37:03.027619 | orchestrator | Wednesday 14 May 2025 02:25:26 +0000 (0:00:01.066) 0:01:43.014 ********* 2025-05-14 
02:37:03.027628 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027635 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027644 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.027651 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027659 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027667 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027675 | orchestrator | 2025-05-14 02:37:03.027683 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:37:03.027691 | orchestrator | Wednesday 14 May 2025 02:25:26 +0000 (0:00:00.875) 0:01:43.889 ********* 2025-05-14 02:37:03.027699 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027707 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027715 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.027723 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027731 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027738 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027746 | orchestrator | 2025-05-14 02:37:03.027755 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:37:03.027763 | orchestrator | Wednesday 14 May 2025 02:25:27 +0000 (0:00:01.019) 0:01:44.909 ********* 2025-05-14 02:37:03.027771 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027779 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027787 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.027794 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027802 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027810 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027818 | orchestrator | 2025-05-14 02:37:03.027826 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:37:03.027834 | orchestrator | Wednesday 14 May 2025 02:25:28 +0000 (0:00:00.755) 0:01:45.664 ********* 2025-05-14 02:37:03.027843 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027850 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027858 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.027866 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.027879 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.027887 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.027895 | orchestrator | 2025-05-14 02:37:03.027903 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:37:03.027911 | orchestrator | Wednesday 14 May 2025 02:25:29 +0000 (0:00:01.261) 0:01:46.925 ********* 2025-05-14 02:37:03.027919 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.027927 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.027995 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.028007 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.028015 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.028023 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.028031 | orchestrator | 2025-05-14 02:37:03.028039 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:37:03.028047 
| orchestrator | Wednesday 14 May 2025 02:25:30 +0000 (0:00:00.778) 0:01:47.704 ********* 2025-05-14 02:37:03.028055 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.028063 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.028071 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.028079 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.028087 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.028095 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.028103 | orchestrator | 2025-05-14 02:37:03.028111 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:37:03.028120 | orchestrator | Wednesday 14 May 2025 02:25:31 +0000 (0:00:00.907) 0:01:48.612 ********* 2025-05-14 02:37:03.028128 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.028135 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.028143 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.028151 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.028159 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.028167 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.028175 | orchestrator | 2025-05-14 02:37:03.028183 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:37:03.028192 | orchestrator | Wednesday 14 May 2025 02:25:32 +0000 (0:00:00.683) 0:01:49.295 ********* 2025-05-14 02:37:03.028200 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:37:03.028208 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:37:03.028216 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.028229 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.028237 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.028245 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.028252 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.028261 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.028268 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.028276 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.028284 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.028292 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.028300 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.028308 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.028316 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.028324 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.028332 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.028340 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.028348 | orchestrator | 2025-05-14 02:37:03.028356 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:37:03.028364 | orchestrator | Wednesday 14 May 2025 02:25:33 +0000 (0:00:01.356) 0:01:50.652 ********* 2025-05-14 02:37:03.028372 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:37:03.028391 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:37:03.028405 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.028419 | orchestrator | skipping: 
[testbed-node-1] => (item=osd memory target)  2025-05-14 02:37:03.028444 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:37:03.028457 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.028469 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:37:03.028482 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:37:03.028494 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.028505 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:37:03.028516 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:37:03.028528 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.028541 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:37:03.028552 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:37:03.028565 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.028577 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 02:37:03.028645 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:37:03.028664 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.028674 | orchestrator | 2025-05-14 02:37:03.028684 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:37:03.028693 | orchestrator | Wednesday 14 May 2025 02:25:34 +0000 (0:00:01.141) 0:01:51.793 ********* 2025-05-14 02:37:03.028703 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.028712 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.028722 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.028731 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.028740 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.028750 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.028760 | orchestrator | 2025-05-14 02:37:03.028769 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:37:03.028779 | orchestrator | Wednesday 14 May 2025 02:25:36 +0000 (0:00:01.414) 0:01:53.208 ********* 2025-05-14 02:37:03.028787 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.028795 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.028803 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.028811 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.028819 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.028827 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.028834 | orchestrator | 2025-05-14 02:37:03.028842 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:37:03.028930 | orchestrator | Wednesday 14 May 2025 02:25:36 +0000 (0:00:00.636) 0:01:53.845 ********* 2025-05-14 02:37:03.028942 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.028950 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.028959 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.028967 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.028975 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.028983 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.028991 | orchestrator | 2025-05-14 
02:37:03.028999 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:37:03.029007 | orchestrator | Wednesday 14 May 2025 02:25:37 +0000 (0:00:00.803) 0:01:54.648 ********* 2025-05-14 02:37:03.029015 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029023 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.029031 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.029039 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.029047 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.029054 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.029068 | orchestrator | 2025-05-14 02:37:03.029075 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:37:03.029082 | orchestrator | Wednesday 14 May 2025 02:25:38 +0000 (0:00:00.552) 0:01:55.200 ********* 2025-05-14 02:37:03.029089 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029096 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.029103 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.029109 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.029116 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.029123 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.029129 | orchestrator | 2025-05-14 02:37:03.029136 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:37:03.029143 | orchestrator | Wednesday 14 May 2025 02:25:39 +0000 (0:00:00.885) 0:01:56.085 ********* 2025-05-14 02:37:03.029150 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029162 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.029169 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.029176 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.029182 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.029189 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.029196 | orchestrator | 2025-05-14 02:37:03.029203 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:37:03.029210 | orchestrator | Wednesday 14 May 2025 02:25:39 +0000 (0:00:00.648) 0:01:56.734 ********* 2025-05-14 02:37:03.029216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.029223 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.029230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.029237 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029244 | orchestrator | 2025-05-14 02:37:03.029251 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:37:03.029257 | orchestrator | Wednesday 14 May 2025 02:25:40 +0000 (0:00:00.863) 0:01:57.597 ********* 2025-05-14 02:37:03.029264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.029271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.029278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.029284 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029291 | orchestrator | 2025-05-14 02:37:03.029298 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 
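The run of mostly skipped ceph-facts tasks around this point is the fallback chain for the RGW bind address: radosgw_address_block (IPv4, then IPv6), then an explicit radosgw_address, then radosgw_interface (IPv4, then the IPv6 variant whose task header precedes this note). In the earlier pass only the explicit radosgw_address was set, which is why that task reported ok on testbed-node-3..5 and the rgw0 instances were built on 192.168.16.13-15 with frontend port 8081. Below is a condensed Python sketch of that precedence, assuming unset options are represented as None; it illustrates the order only and is not ceph-ansible code.

    from typing import Optional

    def resolve_radosgw_address(address_block_match: Optional[str],
                                radosgw_address: Optional[str],
                                interface_address: Optional[str]) -> str:
        # Fallback order mirroring the task sequence in the log.
        for candidate in (address_block_match, radosgw_address, interface_address):
            if candidate:
                return candidate
        raise ValueError("no usable radosgw address configured")

    # Values from the log: explicit radosgw_address per rgw node, port 8081.
    rgw_instances = [
        {"instance_name": "rgw0",
         "radosgw_address": resolve_radosgw_address(None, addr, None),
         "radosgw_frontend_port": 8081}
        for addr in ("192.168.16.13", "192.168.16.14", "192.168.16.15")
    ]
    print(rgw_instances[0])
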
2025-05-14 02:37:03.029305 | orchestrator | Wednesday 14 May 2025 02:25:41 +0000 (0:00:00.422) 0:01:58.020 ********* 2025-05-14 02:37:03.029312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.029319 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.029326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.029332 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029339 | orchestrator | 2025-05-14 02:37:03.029346 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.029353 | orchestrator | Wednesday 14 May 2025 02:25:41 +0000 (0:00:00.424) 0:01:58.445 ********* 2025-05-14 02:37:03.029360 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029366 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.029373 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.029379 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.029386 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.029392 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.029399 | orchestrator | 2025-05-14 02:37:03.029406 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:37:03.029412 | orchestrator | Wednesday 14 May 2025 02:25:42 +0000 (0:00:00.617) 0:01:59.063 ********* 2025-05-14 02:37:03.029419 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.029430 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029437 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:37:03.029444 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.029450 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.029457 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.029463 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.029470 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.029476 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.029483 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.029490 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.029496 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.029503 | orchestrator | 2025-05-14 02:37:03.029509 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:37:03.029516 | orchestrator | Wednesday 14 May 2025 02:25:43 +0000 (0:00:01.301) 0:02:00.364 ********* 2025-05-14 02:37:03.029523 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029529 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.029536 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.029543 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.029551 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.029559 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.029567 | orchestrator | 2025-05-14 02:37:03.029648 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.029660 | orchestrator | Wednesday 14 May 2025 02:25:43 +0000 (0:00:00.601) 0:02:00.965 ********* 2025-05-14 02:37:03.029669 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029677 | orchestrator | skipping: [testbed-node-1] 
2025-05-14 02:37:03.029685 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.029693 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.029701 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.029709 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.029716 | orchestrator | 2025-05-14 02:37:03.029725 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:37:03.029733 | orchestrator | Wednesday 14 May 2025 02:25:44 +0000 (0:00:00.703) 0:02:01.669 ********* 2025-05-14 02:37:03.029741 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.029749 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029758 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:37:03.029766 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.029774 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.029782 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.029790 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.029798 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.029806 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.029814 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.029822 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.029830 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.029838 | orchestrator | 2025-05-14 02:37:03.029846 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:37:03.029855 | orchestrator | Wednesday 14 May 2025 02:25:45 +0000 (0:00:00.809) 0:02:02.478 ********* 2025-05-14 02:37:03.029867 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029875 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.029881 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.029888 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.029895 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.029902 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.029918 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.029925 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.029932 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.029939 | orchestrator | 2025-05-14 02:37:03.029946 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:37:03.029952 | orchestrator | Wednesday 14 May 2025 02:25:46 +0000 (0:00:00.674) 0:02:03.153 ********* 2025-05-14 02:37:03.029959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.029966 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.029973 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.029979 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.029986 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:37:03.029993 | orchestrator | skipping: [testbed-node-1] 
=> (item=testbed-node-4)  2025-05-14 02:37:03.030000 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:37:03.030006 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.030013 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:37:03.030045 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:37:03.030052 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:37:03.030059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.030066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.030073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.030080 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.030087 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:37:03.030093 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:37:03.030100 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.030106 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:37:03.030113 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.030120 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:37:03.030127 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:37:03.030133 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:37:03.030140 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.030146 | orchestrator | 2025-05-14 02:37:03.030153 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:37:03.030160 | orchestrator | Wednesday 14 May 2025 02:25:47 +0000 (0:00:01.403) 0:02:04.557 ********* 2025-05-14 02:37:03.030166 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.030173 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.030179 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.030186 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.030193 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.030199 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.030206 | orchestrator | 2025-05-14 02:37:03.030212 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:37:03.030219 | orchestrator | Wednesday 14 May 2025 02:25:48 +0000 (0:00:01.057) 0:02:05.614 ********* 2025-05-14 02:37:03.030226 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.030233 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.030293 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.030303 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.030310 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.030317 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:37:03.030324 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.030331 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:37:03.030343 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.030350 | orchestrator | 2025-05-14 02:37:03.030357 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:37:03.030364 | orchestrator | Wednesday 14 
May 2025 02:25:49 +0000 (0:00:01.167) 0:02:06.782 ********* 2025-05-14 02:37:03.030371 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.030377 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.030384 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.030391 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.030397 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.030404 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.030411 | orchestrator | 2025-05-14 02:37:03.030418 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:37:03.030424 | orchestrator | Wednesday 14 May 2025 02:25:50 +0000 (0:00:01.137) 0:02:07.919 ********* 2025-05-14 02:37:03.030431 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.030438 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.030445 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.030451 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.030458 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.030465 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.030472 | orchestrator | 2025-05-14 02:37:03.030478 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-14 02:37:03.030485 | orchestrator | Wednesday 14 May 2025 02:25:52 +0000 (0:00:01.124) 0:02:09.044 ********* 2025-05-14 02:37:03.030492 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.030503 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.030510 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.030516 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.030523 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.030530 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.030537 | orchestrator | 2025-05-14 02:37:03.030543 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-14 02:37:03.030550 | orchestrator | Wednesday 14 May 2025 02:25:53 +0000 (0:00:01.381) 0:02:10.425 ********* 2025-05-14 02:37:03.030557 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.030564 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.030570 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.030577 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.030584 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.030591 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.030618 | orchestrator | 2025-05-14 02:37:03.030625 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-05-14 02:37:03.030632 | orchestrator | Wednesday 14 May 2025 02:25:55 +0000 (0:00:02.213) 0:02:12.639 ********* 2025-05-14 02:37:03.030639 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.030647 | orchestrator | 2025-05-14 02:37:03.030654 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-05-14 02:37:03.030661 | orchestrator | Wednesday 14 May 2025 02:25:56 +0000 (0:00:01.322) 0:02:13.962 ********* 2025-05-14 02:37:03.030667 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.030674 | orchestrator | skipping: [testbed-node-1] 2025-05-14 
02:37:03.030681 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.030687 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.030694 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.030701 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.030707 | orchestrator | 2025-05-14 02:37:03.030714 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-05-14 02:37:03.030721 | orchestrator | Wednesday 14 May 2025 02:25:57 +0000 (0:00:00.619) 0:02:14.581 ********* 2025-05-14 02:37:03.030728 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.030741 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.030747 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.030754 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.030761 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.030767 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.030774 | orchestrator | 2025-05-14 02:37:03.030780 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-05-14 02:37:03.030787 | orchestrator | Wednesday 14 May 2025 02:25:58 +0000 (0:00:00.750) 0:02:15.332 ********* 2025-05-14 02:37:03.030794 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:37:03.030801 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:37:03.030808 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:37:03.030814 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:37:03.030821 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:37:03.030827 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:37:03.030834 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:37:03.030841 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:37:03.030848 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:37:03.030854 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:37:03.030907 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:37:03.030916 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:37:03.030923 | orchestrator | 2025-05-14 02:37:03.030930 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-05-14 02:37:03.030936 | orchestrator | Wednesday 14 May 2025 02:25:59 +0000 (0:00:01.350) 0:02:16.683 ********* 2025-05-14 02:37:03.030943 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.030950 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.030956 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.030963 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.030970 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.030976 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.030983 | orchestrator | 2025-05-14 02:37:03.030990 | orchestrator | TASK [ceph-container-common : restore 
certificates selinux context] ************ 2025-05-14 02:37:03.030997 | orchestrator | Wednesday 14 May 2025 02:26:00 +0000 (0:00:00.903) 0:02:17.586 ********* 2025-05-14 02:37:03.031004 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.031011 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.031017 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.031024 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.031031 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.031038 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.031044 | orchestrator | 2025-05-14 02:37:03.031051 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-05-14 02:37:03.031058 | orchestrator | Wednesday 14 May 2025 02:26:01 +0000 (0:00:00.742) 0:02:18.329 ********* 2025-05-14 02:37:03.031065 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.031072 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.031078 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.031085 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.031092 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.031098 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.031105 | orchestrator | 2025-05-14 02:37:03.031116 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-14 02:37:03.031129 | orchestrator | Wednesday 14 May 2025 02:26:01 +0000 (0:00:00.619) 0:02:18.948 ********* 2025-05-14 02:37:03.031136 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.031143 | orchestrator | 2025-05-14 02:37:03.031149 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-14 02:37:03.031156 | orchestrator | Wednesday 14 May 2025 02:26:03 +0000 (0:00:01.152) 0:02:20.101 ********* 2025-05-14 02:37:03.031163 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.031170 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.031177 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.031183 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.031190 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.031197 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.031204 | orchestrator | 2025-05-14 02:37:03.031211 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-14 02:37:03.031280 | orchestrator | Wednesday 14 May 2025 02:26:50 +0000 (0:00:47.146) 0:03:07.248 ********* 2025-05-14 02:37:03.031287 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:37:03.031294 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:37:03.031301 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:37:03.031308 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.031315 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:37:03.031322 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:37:03.031329 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  
2025-05-14 02:37:03.031336 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.031343 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:37:03.031350 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:37:03.031356 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:37:03.031363 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.031370 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:37:03.031377 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:37:03.031384 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:37:03.031390 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.031397 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:37:03.031404 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:37:03.031411 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:37:03.031418 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.031425 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:37:03.031432 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:37:03.031438 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:37:03.031445 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.031452 | orchestrator | 2025-05-14 02:37:03.031458 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-14 02:37:03.031465 | orchestrator | Wednesday 14 May 2025 02:26:51 +0000 (0:00:00.957) 0:03:08.205 ********* 2025-05-14 02:37:03.031471 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.031538 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.031548 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.031563 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.031570 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.031576 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.031583 | orchestrator | 2025-05-14 02:37:03.031589 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-14 02:37:03.031615 | orchestrator | Wednesday 14 May 2025 02:26:51 +0000 (0:00:00.711) 0:03:08.917 ********* 2025-05-14 02:37:03.031622 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.031629 | orchestrator | 2025-05-14 02:37:03.031636 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-14 02:37:03.031643 | orchestrator | Wednesday 14 May 2025 02:26:52 +0000 (0:00:00.146) 0:03:09.064 ********* 2025-05-14 02:37:03.031650 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.031656 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.031663 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.031670 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.031676 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.031683 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.031689 | 
orchestrator | 2025-05-14 02:37:03.031696 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-14 02:37:03.031703 | orchestrator | Wednesday 14 May 2025 02:26:53 +0000 (0:00:00.968) 0:03:10.033 ********* 2025-05-14 02:37:03.031710 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.031717 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.031723 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.031730 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.031737 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.031744 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.031750 | orchestrator | 2025-05-14 02:37:03.031757 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-14 02:37:03.031769 | orchestrator | Wednesday 14 May 2025 02:26:53 +0000 (0:00:00.772) 0:03:10.805 ********* 2025-05-14 02:37:03.031775 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.031782 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.031789 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.031796 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.031802 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.031809 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.031816 | orchestrator | 2025-05-14 02:37:03.031823 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-14 02:37:03.031830 | orchestrator | Wednesday 14 May 2025 02:26:54 +0000 (0:00:00.893) 0:03:11.699 ********* 2025-05-14 02:37:03.031837 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.031844 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.031850 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.031857 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.031864 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.031870 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.031877 | orchestrator | 2025-05-14 02:37:03.031884 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-05-14 02:37:03.031891 | orchestrator | Wednesday 14 May 2025 02:26:56 +0000 (0:00:01.528) 0:03:13.227 ********* 2025-05-14 02:37:03.031899 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.031906 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.031912 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.031919 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.031926 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.031933 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.031939 | orchestrator | 2025-05-14 02:37:03.031946 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-05-14 02:37:03.031953 | orchestrator | Wednesday 14 May 2025 02:26:56 +0000 (0:00:00.766) 0:03:13.994 ********* 2025-05-14 02:37:03.031960 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.031977 | orchestrator | 2025-05-14 02:37:03.031984 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-05-14 02:37:03.031991 | orchestrator | Wednesday 14 May 2025 02:26:58 +0000 (0:00:01.187) 0:03:15.182 ********* 2025-05-14 
02:37:03.031998 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.032005 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.032011 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.032018 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.032025 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.032037 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.032048 | orchestrator | 2025-05-14 02:37:03.032059 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-05-14 02:37:03.032069 | orchestrator | Wednesday 14 May 2025 02:26:58 +0000 (0:00:00.673) 0:03:15.855 ********* 2025-05-14 02:37:03.032080 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.032091 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.032102 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.032112 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.032122 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.032133 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.032143 | orchestrator | 2025-05-14 02:37:03.032155 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-14 02:37:03.032167 | orchestrator | Wednesday 14 May 2025 02:26:59 +0000 (0:00:01.065) 0:03:16.921 ********* 2025-05-14 02:37:03.032179 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.032190 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.032201 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.032211 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.032219 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.032227 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.032235 | orchestrator | 2025-05-14 02:37:03.032243 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-14 02:37:03.032251 | orchestrator | Wednesday 14 May 2025 02:27:00 +0000 (0:00:00.911) 0:03:17.832 ********* 2025-05-14 02:37:03.032259 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.032267 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.032276 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.032284 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.032350 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.032360 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.032368 | orchestrator | 2025-05-14 02:37:03.032376 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-05-14 02:37:03.032385 | orchestrator | Wednesday 14 May 2025 02:27:01 +0000 (0:00:00.737) 0:03:18.569 ********* 2025-05-14 02:37:03.032393 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.032402 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.032410 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.032418 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.032426 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.032434 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.032442 | orchestrator | 2025-05-14 02:37:03.032450 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-14 02:37:03.032458 | orchestrator | Wednesday 14 May 2025 02:27:02 +0000 (0:00:00.505) 0:03:19.075 ********* 
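The back-to-back set_fact ceph_release tasks above (jewel, kraken, luminous, mimic, nautilus, octopus, and then quincy below) come from the role's release.yml flow: each variant is gated on the major version parsed from the earlier "get ceph version" step, so with the ceph-daemon:17.2.7 image only the quincy task is expected to match. A minimal Ansible sketch of that kind of version-to-codename mapping, assuming a ceph_version fact such as "17.2.7" (illustrative only, not the literal role code):

  # Illustrative sketch: map the reported Ceph major version to a release codename.
  # Assumes a ceph_version fact like "17.2.7" was captured by the earlier
  # "get ceph version" / "set_fact ceph_version" tasks.
  - name: set_fact ceph_release quincy (sketch)
    ansible.builtin.set_fact:
      ceph_release: quincy
    when: ceph_version.split('.')[0] | int == 17
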
2025-05-14 02:37:03.032466 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.032475 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.032483 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.032491 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.032499 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.032507 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.032516 | orchestrator | 2025-05-14 02:37:03.032524 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-14 02:37:03.032538 | orchestrator | Wednesday 14 May 2025 02:27:02 +0000 (0:00:00.699) 0:03:19.774 ********* 2025-05-14 02:37:03.032545 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.032552 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.032558 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.032565 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.032572 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.032579 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.032586 | orchestrator | 2025-05-14 02:37:03.032616 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-14 02:37:03.032624 | orchestrator | Wednesday 14 May 2025 02:27:03 +0000 (0:00:00.559) 0:03:20.333 ********* 2025-05-14 02:37:03.032631 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.032638 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.032644 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.032651 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.032658 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.032664 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.032671 | orchestrator | 2025-05-14 02:37:03.032678 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:37:03.032684 | orchestrator | Wednesday 14 May 2025 02:27:04 +0000 (0:00:01.125) 0:03:21.458 ********* 2025-05-14 02:37:03.032691 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.032698 | orchestrator | 2025-05-14 02:37:03.032705 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-14 02:37:03.032712 | orchestrator | Wednesday 14 May 2025 02:27:05 +0000 (0:00:01.109) 0:03:22.568 ********* 2025-05-14 02:37:03.032718 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-14 02:37:03.032725 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-14 02:37:03.032732 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-14 02:37:03.032738 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-14 02:37:03.032745 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-14 02:37:03.032751 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-14 02:37:03.032758 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-14 02:37:03.032765 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-14 02:37:03.032771 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-14 02:37:03.032778 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-14 02:37:03.032784 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-14 02:37:03.032791 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-14 02:37:03.032798 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-14 02:37:03.032804 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-14 02:37:03.032811 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-14 02:37:03.032818 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-14 02:37:03.032824 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-14 02:37:03.032831 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-14 02:37:03.032838 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-14 02:37:03.032844 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-14 02:37:03.032851 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-14 02:37:03.032857 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-14 02:37:03.032864 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-14 02:37:03.032871 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-14 02:37:03.032877 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-14 02:37:03.032888 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-14 02:37:03.032895 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-14 02:37:03.032902 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-14 02:37:03.032908 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-14 02:37:03.032915 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-14 02:37:03.032922 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-14 02:37:03.032977 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:37:03.032987 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-14 02:37:03.032994 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-14 02:37:03.033000 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-14 02:37:03.033007 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:37:03.033014 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-14 02:37:03.033021 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-14 02:37:03.033027 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:37:03.033034 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:37:03.033041 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:37:03.033047 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:37:03.033054 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:37:03.033060 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:37:03.033067 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:37:03.033074 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 
2025-05-14 02:37:03.033080 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:37:03.033087 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:37:03.033094 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:37:03.033100 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:37:03.033111 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:37:03.033118 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:37:03.033125 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:37:03.033132 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:37:03.033139 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:37:03.033145 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:37:03.033152 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:37:03.033159 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:37:03.033166 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:37:03.033173 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:37:03.033179 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:37:03.033186 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:37:03.033193 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:37:03.033200 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:37:03.033206 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:37:03.033213 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:37:03.033225 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:37:03.033232 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:37:03.033238 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:37:03.033245 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:37:03.033252 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:37:03.033259 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:37:03.033266 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:37:03.033273 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:37:03.033279 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:37:03.033286 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-14 02:37:03.033293 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:37:03.033300 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:37:03.033306 | orchestrator | changed: [testbed-node-1] => 
(item=/var/run/ceph) 2025-05-14 02:37:03.033313 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:37:03.033320 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-14 02:37:03.033327 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-14 02:37:03.033333 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-14 02:37:03.033340 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-14 02:37:03.033347 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-14 02:37:03.033354 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-14 02:37:03.033361 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-14 02:37:03.033367 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-14 02:37:03.033374 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-14 02:37:03.033427 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-14 02:37:03.033436 | orchestrator | 2025-05-14 02:37:03.033443 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:37:03.033450 | orchestrator | Wednesday 14 May 2025 02:27:11 +0000 (0:00:06.252) 0:03:28.821 ********* 2025-05-14 02:37:03.033457 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.033464 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.033470 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.033478 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.033485 | orchestrator | 2025-05-14 02:37:03.033491 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-14 02:37:03.033498 | orchestrator | Wednesday 14 May 2025 02:27:13 +0000 (0:00:01.515) 0:03:30.336 ********* 2025-05-14 02:37:03.033505 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-14 02:37:03.033513 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-14 02:37:03.033520 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-14 02:37:03.033526 | orchestrator | 2025-05-14 02:37:03.033533 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-14 02:37:03.033540 | orchestrator | Wednesday 14 May 2025 02:27:14 +0000 (0:00:01.220) 0:03:31.557 ********* 2025-05-14 02:37:03.033551 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-14 02:37:03.033566 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-14 02:37:03.033573 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-14 02:37:03.033580 | orchestrator | 2025-05-14 02:37:03.033586 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:37:03.033639 | 
orchestrator | Wednesday 14 May 2025 02:27:15 +0000 (0:00:01.325) 0:03:32.883 ********* 2025-05-14 02:37:03.033648 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.033655 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.033661 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.033668 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.033675 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.033681 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.033688 | orchestrator | 2025-05-14 02:37:03.033695 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:37:03.033701 | orchestrator | Wednesday 14 May 2025 02:27:16 +0000 (0:00:00.960) 0:03:33.844 ********* 2025-05-14 02:37:03.033708 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.033715 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.033721 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.033728 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.033735 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.033741 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.033748 | orchestrator | 2025-05-14 02:37:03.033755 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:37:03.033761 | orchestrator | Wednesday 14 May 2025 02:27:17 +0000 (0:00:00.696) 0:03:34.541 ********* 2025-05-14 02:37:03.033768 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.033775 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.033781 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.033788 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.033794 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.033801 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.033808 | orchestrator | 2025-05-14 02:37:03.033815 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:37:03.033822 | orchestrator | Wednesday 14 May 2025 02:27:18 +0000 (0:00:00.924) 0:03:35.465 ********* 2025-05-14 02:37:03.033828 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.033835 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.033842 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.033848 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.033855 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.033862 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.033868 | orchestrator | 2025-05-14 02:37:03.033875 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:37:03.033882 | orchestrator | Wednesday 14 May 2025 02:27:19 +0000 (0:00:00.657) 0:03:36.123 ********* 2025-05-14 02:37:03.033889 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.033895 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.033902 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.033908 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.033915 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.033922 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.033928 | orchestrator | 2025-05-14 02:37:03.033935 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:37:03.033942 | 
orchestrator | Wednesday 14 May 2025 02:27:20 +0000 (0:00:00.896) 0:03:37.019 ********* 2025-05-14 02:37:03.033948 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.033955 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.033967 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.033974 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.033980 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.033987 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.033993 | orchestrator | 2025-05-14 02:37:03.033999 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:37:03.034084 | orchestrator | Wednesday 14 May 2025 02:27:20 +0000 (0:00:00.740) 0:03:37.759 ********* 2025-05-14 02:37:03.034095 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034101 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034108 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034114 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.034120 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.034126 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.034133 | orchestrator | 2025-05-14 02:37:03.034139 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:37:03.034145 | orchestrator | Wednesday 14 May 2025 02:27:21 +0000 (0:00:00.726) 0:03:38.486 ********* 2025-05-14 02:37:03.034152 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034158 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034164 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034170 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.034177 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.034183 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.034189 | orchestrator | 2025-05-14 02:37:03.034196 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:37:03.034202 | orchestrator | Wednesday 14 May 2025 02:27:22 +0000 (0:00:00.595) 0:03:39.082 ********* 2025-05-14 02:37:03.034209 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034215 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034221 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034228 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.034234 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.034240 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.034247 | orchestrator | 2025-05-14 02:37:03.034253 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:37:03.034260 | orchestrator | Wednesday 14 May 2025 02:27:24 +0000 (0:00:02.303) 0:03:41.385 ********* 2025-05-14 02:37:03.034270 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034277 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034283 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034289 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.034295 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.034301 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.034308 | orchestrator | 2025-05-14 02:37:03.034314 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from 
ceph_conf_overrides] *** 2025-05-14 02:37:03.034320 | orchestrator | Wednesday 14 May 2025 02:27:25 +0000 (0:00:00.681) 0:03:42.067 ********* 2025-05-14 02:37:03.034327 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:37:03.034333 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:37:03.034339 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034345 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.034351 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.034357 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034364 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.034370 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.034376 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034382 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.034388 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.034394 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.034400 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.034411 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.034418 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.034424 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.034430 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.034436 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.034443 | orchestrator | 2025-05-14 02:37:03.034449 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:37:03.034456 | orchestrator | Wednesday 14 May 2025 02:27:25 +0000 (0:00:00.756) 0:03:42.823 ********* 2025-05-14 02:37:03.034462 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:37:03.034469 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:37:03.034475 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034481 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 02:37:03.034488 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:37:03.034494 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034501 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:37:03.034507 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:37:03.034513 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034519 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-14 02:37:03.034526 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-14 02:37:03.034532 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-14 02:37:03.034538 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-14 02:37:03.034544 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-14 02:37:03.034551 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-14 02:37:03.034557 | orchestrator | 2025-05-14 02:37:03.034563 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:37:03.034570 | orchestrator | Wednesday 14 May 2025 02:27:26 +0000 (0:00:00.618) 0:03:43.442 ********* 2025-05-14 02:37:03.034576 | orchestrator | 
skipping: [testbed-node-0] 2025-05-14 02:37:03.034582 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034589 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034615 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.034625 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.034636 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.034646 | orchestrator | 2025-05-14 02:37:03.034654 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:37:03.034665 | orchestrator | Wednesday 14 May 2025 02:27:27 +0000 (0:00:00.847) 0:03:44.289 ********* 2025-05-14 02:37:03.034673 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034729 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034739 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034745 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.034751 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.034757 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.034763 | orchestrator | 2025-05-14 02:37:03.034770 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:37:03.034776 | orchestrator | Wednesday 14 May 2025 02:27:27 +0000 (0:00:00.570) 0:03:44.860 ********* 2025-05-14 02:37:03.034782 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034789 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034795 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034801 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.034808 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.034814 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.034820 | orchestrator | 2025-05-14 02:37:03.034826 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:37:03.034840 | orchestrator | Wednesday 14 May 2025 02:27:28 +0000 (0:00:00.826) 0:03:45.687 ********* 2025-05-14 02:37:03.034846 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034853 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034859 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034865 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.034872 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.034878 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.034884 | orchestrator | 2025-05-14 02:37:03.034890 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:37:03.034897 | orchestrator | Wednesday 14 May 2025 02:27:29 +0000 (0:00:00.690) 0:03:46.377 ********* 2025-05-14 02:37:03.034903 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034909 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034919 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034925 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.034931 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.034938 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.034944 | orchestrator | 2025-05-14 02:37:03.034950 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:37:03.034956 | orchestrator | Wednesday 14 May 2025 02:27:30 +0000 (0:00:00.824) 0:03:47.201 ********* 
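The _radosgw_address tasks above and below resolve the RGW bind address per host from one of three inputs (radosgw_address_block for IPv4/IPv6, a literal radosgw_address, or a radosgw_interface); in this run the literal-address path is the one that yields a value on the rgw hosts testbed-node-3/4/5, and it feeds the rgw_instances entries printed elsewhere in this log. A sketch of the resulting per-host fact, using only values visible in this output (the exact variable layout is an assumption):

  # Approximate shape of the per-host RGW facts as printed in this log.
  # Values are taken from the surrounding output; the grouping is illustrative.
  rgw_instances:
    - instance_name: rgw0
      radosgw_address: 192.168.16.13    # testbed-node-3; .14 and .15 on nodes 4 and 5
      radosgw_frontend_port: 8081
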
2025-05-14 02:37:03.034962 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.034968 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.034975 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.034981 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.034987 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.034993 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.034999 | orchestrator | 2025-05-14 02:37:03.035006 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:37:03.035012 | orchestrator | Wednesday 14 May 2025 02:27:31 +0000 (0:00:00.851) 0:03:48.053 ********* 2025-05-14 02:37:03.035018 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.035024 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.035030 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.035036 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035042 | orchestrator | 2025-05-14 02:37:03.035048 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:37:03.035055 | orchestrator | Wednesday 14 May 2025 02:27:31 +0000 (0:00:00.805) 0:03:48.859 ********* 2025-05-14 02:37:03.035061 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.035067 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.035073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.035080 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035086 | orchestrator | 2025-05-14 02:37:03.035092 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:37:03.035099 | orchestrator | Wednesday 14 May 2025 02:27:32 +0000 (0:00:01.039) 0:03:49.898 ********* 2025-05-14 02:37:03.035105 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.035111 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.035118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.035124 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035130 | orchestrator | 2025-05-14 02:37:03.035137 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.035143 | orchestrator | Wednesday 14 May 2025 02:27:33 +0000 (0:00:00.431) 0:03:50.330 ********* 2025-05-14 02:37:03.035150 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035156 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.035162 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.035169 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.035180 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.035186 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.035192 | orchestrator | 2025-05-14 02:37:03.035198 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:37:03.035205 | orchestrator | Wednesday 14 May 2025 02:27:34 +0000 (0:00:00.771) 0:03:51.102 ********* 2025-05-14 02:37:03.035211 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.035217 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035224 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-05-14 02:37:03.035230 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.035237 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.035243 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.035249 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 02:37:03.035255 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-14 02:37:03.035262 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 02:37:03.035268 | orchestrator | 2025-05-14 02:37:03.035274 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:37:03.035281 | orchestrator | Wednesday 14 May 2025 02:27:35 +0000 (0:00:01.323) 0:03:52.425 ********* 2025-05-14 02:37:03.035287 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035337 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.035346 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.035352 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.035359 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.035365 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.035371 | orchestrator | 2025-05-14 02:37:03.035378 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.035384 | orchestrator | Wednesday 14 May 2025 02:27:36 +0000 (0:00:00.669) 0:03:53.095 ********* 2025-05-14 02:37:03.035391 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035397 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.035403 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.035409 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.035416 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.035422 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.035429 | orchestrator | 2025-05-14 02:37:03.035435 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:37:03.035441 | orchestrator | Wednesday 14 May 2025 02:27:37 +0000 (0:00:01.042) 0:03:54.137 ********* 2025-05-14 02:37:03.035448 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.035454 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035460 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:37:03.035467 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.035473 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.035479 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.035486 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.035492 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.035498 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.035504 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.035510 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.035520 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.035526 | orchestrator | 2025-05-14 02:37:03.035532 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:37:03.035538 | orchestrator | Wednesday 14 May 2025 02:27:38 +0000 (0:00:00.899) 0:03:55.037 ********* 2025-05-14 02:37:03.035545 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035551 | orchestrator | skipping: [testbed-node-1] 
2025-05-14 02:37:03.035557 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.035563 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.035574 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.035581 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.035587 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.035613 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.035620 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.035626 | orchestrator | 2025-05-14 02:37:03.035633 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:37:03.035639 | orchestrator | Wednesday 14 May 2025 02:27:38 +0000 (0:00:00.945) 0:03:55.982 ********* 2025-05-14 02:37:03.035645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.035651 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.035657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.035664 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.035670 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:37:03.035676 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:37:03.035682 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:37:03.035689 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.035695 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:37:03.035701 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:37:03.035707 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:37:03.035714 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.035720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.035726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.035732 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:37:03.035739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.035745 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.035751 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:37:03.035757 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:37:03.035764 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:37:03.035770 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:37:03.035776 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.035783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:37:03.035789 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.035795 | orchestrator | 2025-05-14 02:37:03.035801 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:37:03.035808 | orchestrator | Wednesday 14 May 2025 02:27:41 +0000 (0:00:02.230) 0:03:58.213 ********* 
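The ceph.conf render below reports "changed" on all six hosts, which is what queues the RUNNING HANDLER blocks that follow: the template task notifies the ceph-handler role, and the mons/osds/mdss/rgws handlers then decide per daemon group whether a restart is actually needed. A minimal sketch of that template-plus-notify wiring, with file names and attributes as assumptions rather than the literal role code:

  # Minimal sketch of a "generate ceph.conf" style task and the handler hook
  # that explains the RUNNING HANDLER output further down. The path, mode and
  # template name are assumptions; the handler names appear in this log.
  - name: generate ceph.conf configuration file (sketch)
    ansible.builtin.template:
      src: ceph.conf.j2
      dest: /etc/ceph/ceph.conf
      mode: "0644"
    notify:
      - mons handler
      - osds handler
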
2025-05-14 02:37:03.035814 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.035821 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.035827 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.035834 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.035840 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.035846 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.035853 | orchestrator | 2025-05-14 02:37:03.035906 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:37:03.035914 | orchestrator | Wednesday 14 May 2025 02:27:46 +0000 (0:00:04.805) 0:04:03.018 ********* 2025-05-14 02:37:03.035921 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.035927 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.035933 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.035939 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.035946 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.035957 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.035963 | orchestrator | 2025-05-14 02:37:03.035970 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-14 02:37:03.035976 | orchestrator | Wednesday 14 May 2025 02:27:47 +0000 (0:00:01.174) 0:04:04.193 ********* 2025-05-14 02:37:03.035982 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.035988 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.035994 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.036001 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.036007 | orchestrator | 2025-05-14 02:37:03.036013 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-14 02:37:03.036019 | orchestrator | Wednesday 14 May 2025 02:27:48 +0000 (0:00:01.172) 0:04:05.366 ********* 2025-05-14 02:37:03.036026 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.036032 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.036038 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.036045 | orchestrator | 2025-05-14 02:37:03.036051 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-14 02:37:03.036057 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.036063 | orchestrator | 2025-05-14 02:37:03.036069 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-14 02:37:03.036080 | orchestrator | Wednesday 14 May 2025 02:27:49 +0000 (0:00:01.198) 0:04:06.564 ********* 2025-05-14 02:37:03.036086 | orchestrator | 2025-05-14 02:37:03.036092 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-14 02:37:03.036098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.036104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.036111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.036117 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036123 | orchestrator | 2025-05-14 02:37:03.036129 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] 
*********************** 2025-05-14 02:37:03.036136 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.036142 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.036148 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.036154 | orchestrator | 2025-05-14 02:37:03.036160 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-14 02:37:03.036166 | orchestrator | Wednesday 14 May 2025 02:27:50 +0000 (0:00:01.281) 0:04:07.846 ********* 2025-05-14 02:37:03.036173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:37:03.036179 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:37:03.036185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:37:03.036191 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.036197 | orchestrator | 2025-05-14 02:37:03.036203 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-14 02:37:03.036210 | orchestrator | Wednesday 14 May 2025 02:27:51 +0000 (0:00:00.938) 0:04:08.784 ********* 2025-05-14 02:37:03.036216 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.036223 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.036229 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.036235 | orchestrator | 2025-05-14 02:37:03.036241 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-14 02:37:03.036247 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036254 | orchestrator | 2025-05-14 02:37:03.036263 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-14 02:37:03.036274 | orchestrator | Wednesday 14 May 2025 02:27:52 +0000 (0:00:00.906) 0:04:09.691 ********* 2025-05-14 02:37:03.036293 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.036304 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.036322 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.036332 | orchestrator | 2025-05-14 02:37:03.036341 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-14 02:37:03.036351 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036361 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.036371 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.036381 | orchestrator | 2025-05-14 02:37:03.036391 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-14 02:37:03.036400 | orchestrator | Wednesday 14 May 2025 02:27:53 +0000 (0:00:00.589) 0:04:10.281 ********* 2025-05-14 02:37:03.036406 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.036413 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.036419 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.036425 | orchestrator | 2025-05-14 02:37:03.036431 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-14 02:37:03.036438 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036444 | orchestrator | 2025-05-14 02:37:03.036450 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-14 02:37:03.036456 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.801) 0:04:11.082 ********* 2025-05-14 02:37:03.036462 | orchestrator | 
skipping: [testbed-node-0] 2025-05-14 02:37:03.036469 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.036475 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.036481 | orchestrator | 2025-05-14 02:37:03.036488 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-14 02:37:03.036494 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036500 | orchestrator | 2025-05-14 02:37:03.036506 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-14 02:37:03.036513 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.684) 0:04:11.767 ********* 2025-05-14 02:37:03.036519 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036525 | orchestrator | 2025-05-14 02:37:03.036609 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-14 02:37:03.036620 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.112) 0:04:11.880 ********* 2025-05-14 02:37:03.036628 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.036635 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.036642 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.036648 | orchestrator | 2025-05-14 02:37:03.036655 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-14 02:37:03.036661 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036667 | orchestrator | 2025-05-14 02:37:03.036674 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-14 02:37:03.036680 | orchestrator | Wednesday 14 May 2025 02:27:55 +0000 (0:00:00.670) 0:04:12.551 ********* 2025-05-14 02:37:03.036686 | orchestrator | 2025-05-14 02:37:03.036692 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-14 02:37:03.036698 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036705 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.036711 | orchestrator | 2025-05-14 02:37:03.036717 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-14 02:37:03.036723 | orchestrator | Wednesday 14 May 2025 02:27:56 +0000 (0:00:00.802) 0:04:13.353 ********* 2025-05-14 02:37:03.036730 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.036736 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.036743 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.036749 | orchestrator | 2025-05-14 02:37:03.036755 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-14 02:37:03.036762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.036768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.036785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.036791 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036798 | orchestrator | 2025-05-14 02:37:03.036804 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-14 02:37:03.036810 | orchestrator | Wednesday 14 May 2025 02:27:57 +0000 (0:00:01.326) 0:04:14.680 ********* 2025-05-14 02:37:03.036816 | orchestrator | 2025-05-14 
02:37:03.036823 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-14 02:37:03.036829 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036835 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.036841 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.036847 | orchestrator | 2025-05-14 02:37:03.036853 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-14 02:37:03.036859 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.036865 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.036871 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.036877 | orchestrator | 2025-05-14 02:37:03.036883 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-14 02:37:03.036889 | orchestrator | Wednesday 14 May 2025 02:27:58 +0000 (0:00:01.289) 0:04:15.969 ********* 2025-05-14 02:37:03.036895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:37:03.036902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:37:03.036908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:37:03.036914 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.036920 | orchestrator | 2025-05-14 02:37:03.036926 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-14 02:37:03.036932 | orchestrator | Wednesday 14 May 2025 02:27:59 +0000 (0:00:00.904) 0:04:16.873 ********* 2025-05-14 02:37:03.036938 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.036945 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.036951 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.036958 | orchestrator | 2025-05-14 02:37:03.036964 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-14 02:37:03.036970 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.036977 | orchestrator | 2025-05-14 02:37:03.036983 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-14 02:37:03.036989 | orchestrator | Wednesday 14 May 2025 02:28:00 +0000 (0:00:01.017) 0:04:17.890 ********* 2025-05-14 02:37:03.036996 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.037002 | orchestrator | 2025-05-14 02:37:03.037008 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-14 02:37:03.037015 | orchestrator | Wednesday 14 May 2025 02:28:01 +0000 (0:00:00.533) 0:04:18.424 ********* 2025-05-14 02:37:03.037021 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.037027 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.037034 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.037040 | orchestrator | 2025-05-14 02:37:03.037046 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-14 02:37:03.037052 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.037058 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.037065 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.037071 | orchestrator | 2025-05-14 02:37:03.037077 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 
2025-05-14 02:37:03.037084 | orchestrator | Wednesday 14 May 2025 02:28:02 +0000 (0:00:00.922) 0:04:19.346 ********* 2025-05-14 02:37:03.037090 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.037096 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.037102 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.037108 | orchestrator | 2025-05-14 02:37:03.037115 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:37:03.037121 | orchestrator | Wednesday 14 May 2025 02:28:03 +0000 (0:00:01.125) 0:04:20.471 ********* 2025-05-14 02:37:03.037132 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.037138 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.037144 | orchestrator | 2025-05-14 02:37:03.037151 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-14 02:37:03.037157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.037163 | orchestrator | 2025-05-14 02:37:03.037220 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:37:03.037230 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.037236 | orchestrator | 2025-05-14 02:37:03.037243 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-14 02:37:03.037249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.037255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.037262 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.037268 | orchestrator | 2025-05-14 02:37:03.037274 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-14 02:37:03.037280 | orchestrator | Wednesday 14 May 2025 02:28:04 +0000 (0:00:01.394) 0:04:21.865 ********* 2025-05-14 02:37:03.037287 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.037293 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.037299 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.037305 | orchestrator | 2025-05-14 02:37:03.037312 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-14 02:37:03.037318 | orchestrator | Wednesday 14 May 2025 02:28:05 +0000 (0:00:01.012) 0:04:22.878 ********* 2025-05-14 02:37:03.037324 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.037331 | orchestrator | 2025-05-14 02:37:03.037337 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-14 02:37:03.037343 | orchestrator | Wednesday 14 May 2025 02:28:06 +0000 (0:00:00.570) 0:04:23.448 ********* 2025-05-14 02:37:03.037349 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.037356 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.037362 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.037368 | orchestrator | 2025-05-14 02:37:03.037374 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-14 02:37:03.037388 | orchestrator | Wednesday 14 May 2025 02:28:07 +0000 (0:00:00.576) 0:04:24.025 ********* 2025-05-14 02:37:03.037394 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.037401 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.037407 | 
orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.037413 | orchestrator | 2025-05-14 02:37:03.037419 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-14 02:37:03.037425 | orchestrator | Wednesday 14 May 2025 02:28:08 +0000 (0:00:01.309) 0:04:25.335 ********* 2025-05-14 02:37:03.037431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.037437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.037443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.037450 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.037456 | orchestrator | 2025-05-14 02:37:03.037462 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-14 02:37:03.037468 | orchestrator | Wednesday 14 May 2025 02:28:08 +0000 (0:00:00.660) 0:04:25.995 ********* 2025-05-14 02:37:03.037474 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.037480 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.037487 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.037493 | orchestrator | 2025-05-14 02:37:03.037499 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-14 02:37:03.037505 | orchestrator | Wednesday 14 May 2025 02:28:09 +0000 (0:00:00.393) 0:04:26.388 ********* 2025-05-14 02:37:03.037511 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.037522 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.037528 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.037534 | orchestrator | 2025-05-14 02:37:03.037541 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-14 02:37:03.037547 | orchestrator | Wednesday 14 May 2025 02:28:09 +0000 (0:00:00.327) 0:04:26.716 ********* 2025-05-14 02:37:03.037553 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.037560 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.037566 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.037572 | orchestrator | 2025-05-14 02:37:03.037578 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-14 02:37:03.037585 | orchestrator | Wednesday 14 May 2025 02:28:10 +0000 (0:00:00.574) 0:04:27.291 ********* 2025-05-14 02:37:03.037591 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.037644 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.037653 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.037663 | orchestrator | 2025-05-14 02:37:03.037672 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:37:03.037679 | orchestrator | Wednesday 14 May 2025 02:28:10 +0000 (0:00:00.329) 0:04:27.621 ********* 2025-05-14 02:37:03.037685 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.037691 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.037698 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.037704 | orchestrator | 2025-05-14 02:37:03.037710 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-14 02:37:03.037716 | orchestrator | 2025-05-14 02:37:03.037723 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:37:03.037729 | orchestrator | Wednesday 
14 May 2025 02:28:12 +0000 (0:00:02.109) 0:04:29.731 ********* 2025-05-14 02:37:03.037735 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.037742 | orchestrator | 2025-05-14 02:37:03.037748 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:37:03.037755 | orchestrator | Wednesday 14 May 2025 02:28:13 +0000 (0:00:01.007) 0:04:30.739 ********* 2025-05-14 02:37:03.037761 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.037767 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.037773 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.037779 | orchestrator | 2025-05-14 02:37:03.037786 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:37:03.037792 | orchestrator | Wednesday 14 May 2025 02:28:14 +0000 (0:00:00.748) 0:04:31.487 ********* 2025-05-14 02:37:03.037798 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.037804 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.037863 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.037872 | orchestrator | 2025-05-14 02:37:03.037878 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:37:03.037884 | orchestrator | Wednesday 14 May 2025 02:28:14 +0000 (0:00:00.311) 0:04:31.799 ********* 2025-05-14 02:37:03.037891 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.037897 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.037903 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.037909 | orchestrator | 2025-05-14 02:37:03.037915 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:37:03.037922 | orchestrator | Wednesday 14 May 2025 02:28:15 +0000 (0:00:00.495) 0:04:32.295 ********* 2025-05-14 02:37:03.037928 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.037934 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.037940 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.037947 | orchestrator | 2025-05-14 02:37:03.037953 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:37:03.037959 | orchestrator | Wednesday 14 May 2025 02:28:15 +0000 (0:00:00.297) 0:04:32.592 ********* 2025-05-14 02:37:03.037971 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.037978 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.037984 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.037990 | orchestrator | 2025-05-14 02:37:03.037997 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:37:03.038003 | orchestrator | Wednesday 14 May 2025 02:28:16 +0000 (0:00:00.883) 0:04:33.476 ********* 2025-05-14 02:37:03.038009 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038035 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038041 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038047 | orchestrator | 2025-05-14 02:37:03.038052 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:37:03.038058 | orchestrator | Wednesday 14 May 2025 02:28:17 +0000 (0:00:00.543) 0:04:34.020 ********* 2025-05-14 02:37:03.038067 | orchestrator | skipping: [testbed-node-0] 
2025-05-14 02:37:03.038073 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038078 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038083 | orchestrator | 2025-05-14 02:37:03.038089 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:37:03.038094 | orchestrator | Wednesday 14 May 2025 02:28:17 +0000 (0:00:00.299) 0:04:34.319 ********* 2025-05-14 02:37:03.038099 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038104 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038110 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038115 | orchestrator | 2025-05-14 02:37:03.038120 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:37:03.038126 | orchestrator | Wednesday 14 May 2025 02:28:17 +0000 (0:00:00.357) 0:04:34.677 ********* 2025-05-14 02:37:03.038131 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038137 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038142 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038147 | orchestrator | 2025-05-14 02:37:03.038152 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:37:03.038158 | orchestrator | Wednesday 14 May 2025 02:28:18 +0000 (0:00:00.414) 0:04:35.092 ********* 2025-05-14 02:37:03.038163 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038169 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038174 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038179 | orchestrator | 2025-05-14 02:37:03.038185 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:37:03.038190 | orchestrator | Wednesday 14 May 2025 02:28:18 +0000 (0:00:00.444) 0:04:35.537 ********* 2025-05-14 02:37:03.038196 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.038201 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.038207 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.038212 | orchestrator | 2025-05-14 02:37:03.038218 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:37:03.038223 | orchestrator | Wednesday 14 May 2025 02:28:19 +0000 (0:00:01.304) 0:04:36.841 ********* 2025-05-14 02:37:03.038229 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038234 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038240 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038245 | orchestrator | 2025-05-14 02:37:03.038251 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:37:03.038256 | orchestrator | Wednesday 14 May 2025 02:28:20 +0000 (0:00:00.345) 0:04:37.186 ********* 2025-05-14 02:37:03.038262 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.038267 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.038273 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.038278 | orchestrator | 2025-05-14 02:37:03.038284 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:37:03.038289 | orchestrator | Wednesday 14 May 2025 02:28:20 +0000 (0:00:00.394) 0:04:37.580 ********* 2025-05-14 02:37:03.038295 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038300 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038310 
| orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038315 | orchestrator | 2025-05-14 02:37:03.038321 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:37:03.038326 | orchestrator | Wednesday 14 May 2025 02:28:21 +0000 (0:00:00.656) 0:04:38.237 ********* 2025-05-14 02:37:03.038332 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038337 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038343 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038348 | orchestrator | 2025-05-14 02:37:03.038354 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:37:03.038359 | orchestrator | Wednesday 14 May 2025 02:28:21 +0000 (0:00:00.389) 0:04:38.626 ********* 2025-05-14 02:37:03.038365 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038370 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038376 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038381 | orchestrator | 2025-05-14 02:37:03.038387 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:37:03.038392 | orchestrator | Wednesday 14 May 2025 02:28:22 +0000 (0:00:00.413) 0:04:39.040 ********* 2025-05-14 02:37:03.038398 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038403 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038454 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038462 | orchestrator | 2025-05-14 02:37:03.038467 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:37:03.038473 | orchestrator | Wednesday 14 May 2025 02:28:22 +0000 (0:00:00.467) 0:04:39.507 ********* 2025-05-14 02:37:03.038478 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038484 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038490 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038495 | orchestrator | 2025-05-14 02:37:03.038501 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:37:03.038506 | orchestrator | Wednesday 14 May 2025 02:28:23 +0000 (0:00:00.802) 0:04:40.310 ********* 2025-05-14 02:37:03.038512 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.038517 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.038523 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.038528 | orchestrator | 2025-05-14 02:37:03.038534 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:37:03.038540 | orchestrator | Wednesday 14 May 2025 02:28:23 +0000 (0:00:00.419) 0:04:40.730 ********* 2025-05-14 02:37:03.038545 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.038551 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.038557 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.038562 | orchestrator | 2025-05-14 02:37:03.038567 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:37:03.038573 | orchestrator | Wednesday 14 May 2025 02:28:24 +0000 (0:00:00.528) 0:04:41.259 ********* 2025-05-14 02:37:03.038578 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038583 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038589 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038612 | orchestrator | 
2025-05-14 02:37:03.038620 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:37:03.038628 | orchestrator | Wednesday 14 May 2025 02:28:24 +0000 (0:00:00.576) 0:04:41.835 ********* 2025-05-14 02:37:03.038644 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038657 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038665 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038673 | orchestrator | 2025-05-14 02:37:03.038682 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:37:03.038690 | orchestrator | Wednesday 14 May 2025 02:28:25 +0000 (0:00:00.760) 0:04:42.596 ********* 2025-05-14 02:37:03.038698 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038705 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038713 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038729 | orchestrator | 2025-05-14 02:37:03.038737 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:37:03.038745 | orchestrator | Wednesday 14 May 2025 02:28:25 +0000 (0:00:00.388) 0:04:42.984 ********* 2025-05-14 02:37:03.038753 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038761 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038769 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038777 | orchestrator | 2025-05-14 02:37:03.038786 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:37:03.038795 | orchestrator | Wednesday 14 May 2025 02:28:26 +0000 (0:00:00.389) 0:04:43.375 ********* 2025-05-14 02:37:03.038805 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038814 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038822 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038830 | orchestrator | 2025-05-14 02:37:03.038836 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:37:03.038842 | orchestrator | Wednesday 14 May 2025 02:28:26 +0000 (0:00:00.381) 0:04:43.756 ********* 2025-05-14 02:37:03.038847 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038853 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038858 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038863 | orchestrator | 2025-05-14 02:37:03.038869 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:37:03.038874 | orchestrator | Wednesday 14 May 2025 02:28:27 +0000 (0:00:00.607) 0:04:44.363 ********* 2025-05-14 02:37:03.038880 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038885 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038891 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038897 | orchestrator | 2025-05-14 02:37:03.038902 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:37:03.038908 | orchestrator | Wednesday 14 May 2025 02:28:27 +0000 (0:00:00.336) 0:04:44.700 ********* 2025-05-14 02:37:03.038913 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038919 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038924 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038930 | orchestrator | 2025-05-14 02:37:03.038935 | orchestrator 
| TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:37:03.038941 | orchestrator | Wednesday 14 May 2025 02:28:28 +0000 (0:00:00.346) 0:04:45.046 ********* 2025-05-14 02:37:03.038947 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038952 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038958 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038963 | orchestrator | 2025-05-14 02:37:03.038969 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:37:03.038975 | orchestrator | Wednesday 14 May 2025 02:28:28 +0000 (0:00:00.345) 0:04:45.391 ********* 2025-05-14 02:37:03.038980 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.038986 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.038991 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.038996 | orchestrator | 2025-05-14 02:37:03.039002 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:37:03.039007 | orchestrator | Wednesday 14 May 2025 02:28:28 +0000 (0:00:00.601) 0:04:45.993 ********* 2025-05-14 02:37:03.039013 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039018 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039024 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039029 | orchestrator | 2025-05-14 02:37:03.039035 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:37:03.039103 | orchestrator | Wednesday 14 May 2025 02:28:29 +0000 (0:00:00.392) 0:04:46.386 ********* 2025-05-14 02:37:03.039112 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039117 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039130 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039135 | orchestrator | 2025-05-14 02:37:03.039141 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:37:03.039146 | orchestrator | Wednesday 14 May 2025 02:28:29 +0000 (0:00:00.418) 0:04:46.805 ********* 2025-05-14 02:37:03.039152 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:37:03.039157 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:37:03.039163 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039168 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.039174 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.039179 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039185 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.039190 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.039196 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039201 | orchestrator | 2025-05-14 02:37:03.039206 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:37:03.039212 | orchestrator | Wednesday 14 May 2025 02:28:30 +0000 (0:00:00.405) 0:04:47.210 ********* 2025-05-14 02:37:03.039217 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:37:03.039223 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:37:03.039228 | orchestrator | skipping: [testbed-node-0] 
2025-05-14 02:37:03.039234 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 02:37:03.039239 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:37:03.039245 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039255 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:37:03.039260 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:37:03.039265 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039271 | orchestrator | 2025-05-14 02:37:03.039276 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:37:03.039282 | orchestrator | Wednesday 14 May 2025 02:28:30 +0000 (0:00:00.698) 0:04:47.908 ********* 2025-05-14 02:37:03.039287 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039293 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039298 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039303 | orchestrator | 2025-05-14 02:37:03.039309 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:37:03.039314 | orchestrator | Wednesday 14 May 2025 02:28:31 +0000 (0:00:00.446) 0:04:48.355 ********* 2025-05-14 02:37:03.039319 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039325 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039330 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039336 | orchestrator | 2025-05-14 02:37:03.039341 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:37:03.039347 | orchestrator | Wednesday 14 May 2025 02:28:31 +0000 (0:00:00.389) 0:04:48.745 ********* 2025-05-14 02:37:03.039353 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039358 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039364 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039369 | orchestrator | 2025-05-14 02:37:03.039374 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:37:03.039380 | orchestrator | Wednesday 14 May 2025 02:28:32 +0000 (0:00:00.389) 0:04:49.134 ********* 2025-05-14 02:37:03.039385 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039391 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039396 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039402 | orchestrator | 2025-05-14 02:37:03.039407 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:37:03.039413 | orchestrator | Wednesday 14 May 2025 02:28:32 +0000 (0:00:00.642) 0:04:49.776 ********* 2025-05-14 02:37:03.039423 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039428 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039434 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039439 | orchestrator | 2025-05-14 02:37:03.039445 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:37:03.039450 | orchestrator | Wednesday 14 May 2025 02:28:33 +0000 (0:00:00.371) 0:04:50.148 ********* 2025-05-14 02:37:03.039456 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039461 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039466 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039472 | orchestrator | 2025-05-14 02:37:03.039477 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:37:03.039483 | orchestrator | Wednesday 14 May 2025 02:28:33 +0000 (0:00:00.341) 0:04:50.489 ********* 2025-05-14 02:37:03.039489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.039494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.039500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.039505 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039510 | orchestrator | 2025-05-14 02:37:03.039516 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:37:03.039521 | orchestrator | Wednesday 14 May 2025 02:28:33 +0000 (0:00:00.430) 0:04:50.919 ********* 2025-05-14 02:37:03.039527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.039532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.039538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.039543 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039548 | orchestrator | 2025-05-14 02:37:03.039554 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:37:03.039559 | orchestrator | Wednesday 14 May 2025 02:28:34 +0000 (0:00:00.432) 0:04:51.352 ********* 2025-05-14 02:37:03.039619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.039627 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.039633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.039639 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039644 | orchestrator | 2025-05-14 02:37:03.039650 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.039655 | orchestrator | Wednesday 14 May 2025 02:28:35 +0000 (0:00:00.692) 0:04:52.045 ********* 2025-05-14 02:37:03.039661 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039666 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039672 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039677 | orchestrator | 2025-05-14 02:37:03.039683 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:37:03.039689 | orchestrator | Wednesday 14 May 2025 02:28:35 +0000 (0:00:00.640) 0:04:52.686 ********* 2025-05-14 02:37:03.039694 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.039700 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039705 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:37:03.039711 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039716 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.039721 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039727 | orchestrator | 2025-05-14 02:37:03.039732 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:37:03.039738 | orchestrator | Wednesday 14 May 2025 02:28:36 +0000 (0:00:00.524) 0:04:53.210 ********* 2025-05-14 02:37:03.039743 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039748 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039754 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039764 | orchestrator | 2025-05-14 02:37:03.039769 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.039778 | orchestrator | Wednesday 14 May 2025 02:28:36 +0000 (0:00:00.397) 0:04:53.607 ********* 2025-05-14 02:37:03.039784 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039789 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039795 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039800 | orchestrator | 2025-05-14 02:37:03.039805 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:37:03.039811 | orchestrator | Wednesday 14 May 2025 02:28:37 +0000 (0:00:00.397) 0:04:54.005 ********* 2025-05-14 02:37:03.039816 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.039822 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039827 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:37:03.039832 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039838 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.039843 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039848 | orchestrator | 2025-05-14 02:37:03.039854 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:37:03.039859 | orchestrator | Wednesday 14 May 2025 02:28:37 +0000 (0:00:00.902) 0:04:54.908 ********* 2025-05-14 02:37:03.039865 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039870 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039875 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039881 | orchestrator | 2025-05-14 02:37:03.039887 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:37:03.039892 | orchestrator | Wednesday 14 May 2025 02:28:38 +0000 (0:00:00.393) 0:04:55.302 ********* 2025-05-14 02:37:03.039898 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.039903 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.039909 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.039914 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039920 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:37:03.039925 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:37:03.039931 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:37:03.039936 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039942 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:37:03.039947 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:37:03.039953 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:37:03.039958 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039963 | orchestrator | 2025-05-14 02:37:03.039969 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:37:03.039974 | orchestrator | Wednesday 14 May 2025 02:28:39 +0000 
(0:00:00.736) 0:04:56.038 ********* 2025-05-14 02:37:03.039980 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.039986 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.039991 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.039997 | orchestrator | 2025-05-14 02:37:03.040002 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:37:03.040008 | orchestrator | Wednesday 14 May 2025 02:28:39 +0000 (0:00:00.567) 0:04:56.606 ********* 2025-05-14 02:37:03.040013 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.040019 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.040024 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.040029 | orchestrator | 2025-05-14 02:37:03.040035 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:37:03.040041 | orchestrator | Wednesday 14 May 2025 02:28:40 +0000 (0:00:00.575) 0:04:57.181 ********* 2025-05-14 02:37:03.040046 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.040058 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.040063 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.040069 | orchestrator | 2025-05-14 02:37:03.040074 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:37:03.040080 | orchestrator | Wednesday 14 May 2025 02:28:40 +0000 (0:00:00.774) 0:04:57.956 ********* 2025-05-14 02:37:03.040085 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.040091 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.040116 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.040122 | orchestrator | 2025-05-14 02:37:03.040128 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-14 02:37:03.040133 | orchestrator | Wednesday 14 May 2025 02:28:41 +0000 (0:00:00.503) 0:04:58.459 ********* 2025-05-14 02:37:03.040139 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.040145 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.040150 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.040156 | orchestrator | 2025-05-14 02:37:03.040161 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-14 02:37:03.040166 | orchestrator | Wednesday 14 May 2025 02:28:42 +0000 (0:00:00.548) 0:04:59.008 ********* 2025-05-14 02:37:03.040172 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.040178 | orchestrator | 2025-05-14 02:37:03.040183 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-14 02:37:03.040189 | orchestrator | Wednesday 14 May 2025 02:28:42 +0000 (0:00:00.561) 0:04:59.570 ********* 2025-05-14 02:37:03.040194 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.040200 | orchestrator | 2025-05-14 02:37:03.040205 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-14 02:37:03.040211 | orchestrator | Wednesday 14 May 2025 02:28:42 +0000 (0:00:00.124) 0:04:59.694 ********* 2025-05-14 02:37:03.040216 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-14 02:37:03.040222 | orchestrator | 2025-05-14 02:37:03.040227 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] 
**************************** 2025-05-14 02:37:03.040232 | orchestrator | Wednesday 14 May 2025 02:28:43 +0000 (0:00:00.687) 0:05:00.382 ********* 2025-05-14 02:37:03.040238 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.040245 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.040251 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.040257 | orchestrator | 2025-05-14 02:37:03.040268 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-14 02:37:03.040274 | orchestrator | Wednesday 14 May 2025 02:28:43 +0000 (0:00:00.483) 0:05:00.866 ********* 2025-05-14 02:37:03.040281 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.040287 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.040294 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.040300 | orchestrator | 2025-05-14 02:37:03.040307 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-14 02:37:03.040313 | orchestrator | Wednesday 14 May 2025 02:28:44 +0000 (0:00:00.328) 0:05:01.194 ********* 2025-05-14 02:37:03.040320 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.040326 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.040333 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.040339 | orchestrator | 2025-05-14 02:37:03.040345 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-14 02:37:03.040352 | orchestrator | Wednesday 14 May 2025 02:28:45 +0000 (0:00:01.301) 0:05:02.496 ********* 2025-05-14 02:37:03.040359 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.040365 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.040372 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.040378 | orchestrator | 2025-05-14 02:37:03.040385 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-14 02:37:03.040391 | orchestrator | Wednesday 14 May 2025 02:28:46 +0000 (0:00:00.941) 0:05:03.437 ********* 2025-05-14 02:37:03.040402 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.040409 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.040415 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.040422 | orchestrator | 2025-05-14 02:37:03.040428 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-14 02:37:03.040435 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:00.653) 0:05:04.090 ********* 2025-05-14 02:37:03.040441 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.040448 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.040454 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.040461 | orchestrator | 2025-05-14 02:37:03.040468 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-14 02:37:03.040475 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:00.609) 0:05:04.700 ********* 2025-05-14 02:37:03.040481 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.040487 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.040494 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.040500 | orchestrator | 2025-05-14 02:37:03.040507 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-14 02:37:03.040513 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:00.275) 
0:05:04.975 ********* 2025-05-14 02:37:03.040520 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.040526 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.040533 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.040540 | orchestrator | 2025-05-14 02:37:03.040546 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-05-14 02:37:03.040553 | orchestrator | Wednesday 14 May 2025 02:28:48 +0000 (0:00:00.484) 0:05:05.460 ********* 2025-05-14 02:37:03.040559 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.040565 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.040572 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.040578 | orchestrator | 2025-05-14 02:37:03.040585 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-14 02:37:03.040609 | orchestrator | Wednesday 14 May 2025 02:28:48 +0000 (0:00:00.302) 0:05:05.762 ********* 2025-05-14 02:37:03.040616 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.040621 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.040627 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.040632 | orchestrator | 2025-05-14 02:37:03.040638 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-14 02:37:03.040643 | orchestrator | Wednesday 14 May 2025 02:28:49 +0000 (0:00:00.451) 0:05:06.214 ********* 2025-05-14 02:37:03.040649 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.040654 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.040659 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.040665 | orchestrator | 2025-05-14 02:37:03.040670 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-14 02:37:03.040695 | orchestrator | Wednesday 14 May 2025 02:28:50 +0000 (0:00:01.312) 0:05:07.526 ********* 2025-05-14 02:37:03.040701 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.040706 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.040712 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.040717 | orchestrator | 2025-05-14 02:37:03.040723 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-14 02:37:03.040728 | orchestrator | Wednesday 14 May 2025 02:28:51 +0000 (0:00:00.652) 0:05:08.178 ********* 2025-05-14 02:37:03.040734 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.040739 | orchestrator | 2025-05-14 02:37:03.040745 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-05-14 02:37:03.040750 | orchestrator | Wednesday 14 May 2025 02:28:51 +0000 (0:00:00.582) 0:05:08.761 ********* 2025-05-14 02:37:03.040756 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.040761 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.040766 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.040779 | orchestrator | 2025-05-14 02:37:03.040784 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-14 02:37:03.040790 | orchestrator | Wednesday 14 May 2025 02:28:52 +0000 (0:00:00.385) 0:05:09.147 ********* 2025-05-14 02:37:03.040795 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.040801 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 02:37:03.040806 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.040811 | orchestrator | 2025-05-14 02:37:03.040817 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-14 02:37:03.040822 | orchestrator | Wednesday 14 May 2025 02:28:52 +0000 (0:00:00.654) 0:05:09.801 ********* 2025-05-14 02:37:03.040828 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.040833 | orchestrator | 2025-05-14 02:37:03.040842 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-14 02:37:03.040847 | orchestrator | Wednesday 14 May 2025 02:28:53 +0000 (0:00:00.601) 0:05:10.403 ********* 2025-05-14 02:37:03.040853 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.040858 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.040864 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.040869 | orchestrator | 2025-05-14 02:37:03.040875 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-14 02:37:03.040880 | orchestrator | Wednesday 14 May 2025 02:28:54 +0000 (0:00:01.182) 0:05:11.586 ********* 2025-05-14 02:37:03.040885 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.040891 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.040896 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.040902 | orchestrator | 2025-05-14 02:37:03.040907 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-14 02:37:03.040912 | orchestrator | Wednesday 14 May 2025 02:28:55 +0000 (0:00:01.376) 0:05:12.962 ********* 2025-05-14 02:37:03.040918 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.040923 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.040929 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.040934 | orchestrator | 2025-05-14 02:37:03.040940 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-14 02:37:03.040945 | orchestrator | Wednesday 14 May 2025 02:28:57 +0000 (0:00:01.639) 0:05:14.602 ********* 2025-05-14 02:37:03.040951 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.040956 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.040961 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.040967 | orchestrator | 2025-05-14 02:37:03.040972 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-14 02:37:03.040978 | orchestrator | Wednesday 14 May 2025 02:28:59 +0000 (0:00:02.230) 0:05:16.832 ********* 2025-05-14 02:37:03.040983 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.040989 | orchestrator | 2025-05-14 02:37:03.040994 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-05-14 02:37:03.040999 | orchestrator | Wednesday 14 May 2025 02:29:00 +0000 (0:00:00.686) 0:05:17.519 ********* 2025-05-14 02:37:03.041005 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
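
The "waiting for the monitor(s) to form the quorum..." task above polls the cluster until every expected monitor reports into the quorum, which is why it logs FAILED - RETRYING before finally returning ok. A minimal Python sketch of that kind of check, assuming the ceph CLI is reachable on the node and using illustrative retry/delay values (none of this is taken from the role itself, only the mon hostnames come from this run):

    import json
    import subprocess
    import time

    EXPECTED_MONS = {"testbed-node-0", "testbed-node-1", "testbed-node-2"}  # mon hosts in this run

    def mons_in_quorum():
        # "ceph quorum_status" reports the current quorum membership as JSON.
        out = subprocess.run(
            ["ceph", "quorum_status", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return set(json.loads(out).get("quorum_names", []))

    def wait_for_quorum(retries=10, delay=20):
        # Retry a fixed number of times, mirroring an Ansible retries/delay pair.
        for _ in range(retries):
            if EXPECTED_MONS.issubset(mons_in_quorum()):
                return True
            time.sleep(delay)
        return False

    if __name__ == "__main__":
        print("quorum formed" if wait_for_quorum() else "quorum not formed")
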
2025-05-14 02:37:03.041010 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.041016 | orchestrator | 2025-05-14 02:37:03.041021 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-14 02:37:03.041026 | orchestrator | Wednesday 14 May 2025 02:29:21 +0000 (0:00:21.458) 0:05:38.977 ********* 2025-05-14 02:37:03.041032 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.041037 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.041043 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.041048 | orchestrator | 2025-05-14 02:37:03.041054 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-14 02:37:03.041063 | orchestrator | Wednesday 14 May 2025 02:29:29 +0000 (0:00:07.725) 0:05:46.702 ********* 2025-05-14 02:37:03.041068 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041074 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041079 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041084 | orchestrator | 2025-05-14 02:37:03.041090 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:37:03.041095 | orchestrator | Wednesday 14 May 2025 02:29:30 +0000 (0:00:01.268) 0:05:47.971 ********* 2025-05-14 02:37:03.041101 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.041106 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.041111 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.041117 | orchestrator | 2025-05-14 02:37:03.041122 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-14 02:37:03.041128 | orchestrator | Wednesday 14 May 2025 02:29:31 +0000 (0:00:00.784) 0:05:48.755 ********* 2025-05-14 02:37:03.041133 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.041138 | orchestrator | 2025-05-14 02:37:03.041144 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-14 02:37:03.041167 | orchestrator | Wednesday 14 May 2025 02:29:32 +0000 (0:00:00.867) 0:05:49.622 ********* 2025-05-14 02:37:03.041174 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.041179 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.041185 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.041190 | orchestrator | 2025-05-14 02:37:03.041195 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-14 02:37:03.041201 | orchestrator | Wednesday 14 May 2025 02:29:32 +0000 (0:00:00.344) 0:05:49.967 ********* 2025-05-14 02:37:03.041206 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.041212 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.041217 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.041223 | orchestrator | 2025-05-14 02:37:03.041228 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-14 02:37:03.041234 | orchestrator | Wednesday 14 May 2025 02:29:34 +0000 (0:00:01.260) 0:05:51.228 ********* 2025-05-14 02:37:03.041239 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:37:03.041245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:37:03.041250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 
02:37:03.041256 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041261 | orchestrator | 2025-05-14 02:37:03.041267 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-14 02:37:03.041272 | orchestrator | Wednesday 14 May 2025 02:29:35 +0000 (0:00:01.059) 0:05:52.287 ********* 2025-05-14 02:37:03.041277 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.041283 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.041288 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.041293 | orchestrator | 2025-05-14 02:37:03.041299 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:37:03.041304 | orchestrator | Wednesday 14 May 2025 02:29:35 +0000 (0:00:00.342) 0:05:52.630 ********* 2025-05-14 02:37:03.041313 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.041318 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.041324 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.041329 | orchestrator | 2025-05-14 02:37:03.041335 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-14 02:37:03.041340 | orchestrator | 2025-05-14 02:37:03.041345 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:37:03.041351 | orchestrator | Wednesday 14 May 2025 02:29:37 +0000 (0:00:02.214) 0:05:54.844 ********* 2025-05-14 02:37:03.041356 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.041362 | orchestrator | 2025-05-14 02:37:03.041371 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:37:03.041376 | orchestrator | Wednesday 14 May 2025 02:29:38 +0000 (0:00:00.835) 0:05:55.680 ********* 2025-05-14 02:37:03.041382 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.041387 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.041393 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.041398 | orchestrator | 2025-05-14 02:37:03.041403 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:37:03.041409 | orchestrator | Wednesday 14 May 2025 02:29:39 +0000 (0:00:00.779) 0:05:56.460 ********* 2025-05-14 02:37:03.041414 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041420 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041425 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041431 | orchestrator | 2025-05-14 02:37:03.041436 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:37:03.041442 | orchestrator | Wednesday 14 May 2025 02:29:39 +0000 (0:00:00.350) 0:05:56.810 ********* 2025-05-14 02:37:03.041447 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041452 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041458 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041463 | orchestrator | 2025-05-14 02:37:03.041469 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:37:03.041474 | orchestrator | Wednesday 14 May 2025 02:29:40 +0000 (0:00:00.631) 0:05:57.442 ********* 2025-05-14 02:37:03.041480 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041485 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 02:37:03.041490 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041496 | orchestrator | 2025-05-14 02:37:03.041501 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:37:03.041507 | orchestrator | Wednesday 14 May 2025 02:29:40 +0000 (0:00:00.350) 0:05:57.793 ********* 2025-05-14 02:37:03.041512 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.041518 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.041523 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.041528 | orchestrator | 2025-05-14 02:37:03.041534 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:37:03.041539 | orchestrator | Wednesday 14 May 2025 02:29:41 +0000 (0:00:00.749) 0:05:58.543 ********* 2025-05-14 02:37:03.041544 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041550 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041555 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041561 | orchestrator | 2025-05-14 02:37:03.041566 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:37:03.041571 | orchestrator | Wednesday 14 May 2025 02:29:41 +0000 (0:00:00.351) 0:05:58.894 ********* 2025-05-14 02:37:03.041577 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041582 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041588 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041632 | orchestrator | 2025-05-14 02:37:03.041638 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:37:03.041644 | orchestrator | Wednesday 14 May 2025 02:29:42 +0000 (0:00:00.629) 0:05:59.523 ********* 2025-05-14 02:37:03.041649 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041655 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041660 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041666 | orchestrator | 2025-05-14 02:37:03.041671 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:37:03.041696 | orchestrator | Wednesday 14 May 2025 02:29:42 +0000 (0:00:00.320) 0:05:59.844 ********* 2025-05-14 02:37:03.041703 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041709 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041714 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041720 | orchestrator | 2025-05-14 02:37:03.041725 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:37:03.041735 | orchestrator | Wednesday 14 May 2025 02:29:43 +0000 (0:00:00.364) 0:06:00.208 ********* 2025-05-14 02:37:03.041741 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041746 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041751 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041757 | orchestrator | 2025-05-14 02:37:03.041763 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:37:03.041768 | orchestrator | Wednesday 14 May 2025 02:29:43 +0000 (0:00:00.395) 0:06:00.603 ********* 2025-05-14 02:37:03.041774 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.041779 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.041785 | orchestrator | ok: [testbed-node-2] 
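
The ceph-handler "check for a ... container" tasks above only probe whether a container for each daemon type is running on the node, so that the later handler_*_status facts can be set; checks for daemon types not assigned to a host are skipped outright. A rough sketch of such a probe, assuming docker as the container runtime and a ceph-<daemon>-<hostname> naming scheme (both are assumptions for illustration, not details confirmed by this log):

    import socket
    import subprocess

    def container_running(daemon: str) -> bool:
        # e.g. daemon="mon" -> look for a container whose name matches "ceph-mon-<hostname>".
        name = f"ceph-{daemon}-{socket.gethostname()}"
        out = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name={name}"],
            capture_output=True, text=True, check=True,
        ).stdout
        return bool(out.strip())

    if __name__ == "__main__":
        for daemon in ("mon", "mgr", "osd", "mds", "rgw", "crash"):
            print(daemon, "running" if container_running(daemon) else "not deployed here")
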
2025-05-14 02:37:03.041790 | orchestrator | 2025-05-14 02:37:03.041795 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:37:03.041800 | orchestrator | Wednesday 14 May 2025 02:29:44 +0000 (0:00:01.073) 0:06:01.677 ********* 2025-05-14 02:37:03.041805 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041810 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041815 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041820 | orchestrator | 2025-05-14 02:37:03.041824 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:37:03.041829 | orchestrator | Wednesday 14 May 2025 02:29:45 +0000 (0:00:00.328) 0:06:02.005 ********* 2025-05-14 02:37:03.041834 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.041839 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.041844 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.041849 | orchestrator | 2025-05-14 02:37:03.041853 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:37:03.041861 | orchestrator | Wednesday 14 May 2025 02:29:45 +0000 (0:00:00.413) 0:06:02.419 ********* 2025-05-14 02:37:03.041866 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041871 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041876 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041881 | orchestrator | 2025-05-14 02:37:03.041885 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:37:03.041890 | orchestrator | Wednesday 14 May 2025 02:29:45 +0000 (0:00:00.340) 0:06:02.759 ********* 2025-05-14 02:37:03.041895 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041900 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041905 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041910 | orchestrator | 2025-05-14 02:37:03.041914 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:37:03.041919 | orchestrator | Wednesday 14 May 2025 02:29:46 +0000 (0:00:00.694) 0:06:03.454 ********* 2025-05-14 02:37:03.041924 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041929 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041934 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041939 | orchestrator | 2025-05-14 02:37:03.041944 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:37:03.041949 | orchestrator | Wednesday 14 May 2025 02:29:46 +0000 (0:00:00.342) 0:06:03.796 ********* 2025-05-14 02:37:03.041954 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041959 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041964 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041968 | orchestrator | 2025-05-14 02:37:03.041973 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:37:03.041978 | orchestrator | Wednesday 14 May 2025 02:29:47 +0000 (0:00:00.343) 0:06:04.140 ********* 2025-05-14 02:37:03.041983 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.041988 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.041993 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.041998 | orchestrator | 2025-05-14 02:37:03.042002 | 
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:37:03.042007 | orchestrator | Wednesday 14 May 2025 02:29:47 +0000 (0:00:00.312) 0:06:04.452 ********* 2025-05-14 02:37:03.042036 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.042043 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.042048 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.042053 | orchestrator | 2025-05-14 02:37:03.042058 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:37:03.042062 | orchestrator | Wednesday 14 May 2025 02:29:48 +0000 (0:00:00.675) 0:06:05.127 ********* 2025-05-14 02:37:03.042067 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.042072 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.042077 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.042082 | orchestrator | 2025-05-14 02:37:03.042086 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:37:03.042091 | orchestrator | Wednesday 14 May 2025 02:29:48 +0000 (0:00:00.383) 0:06:05.511 ********* 2025-05-14 02:37:03.042096 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042101 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042106 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042111 | orchestrator | 2025-05-14 02:37:03.042115 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:37:03.042120 | orchestrator | Wednesday 14 May 2025 02:29:48 +0000 (0:00:00.332) 0:06:05.843 ********* 2025-05-14 02:37:03.042125 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042130 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042135 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042139 | orchestrator | 2025-05-14 02:37:03.042144 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:37:03.042149 | orchestrator | Wednesday 14 May 2025 02:29:49 +0000 (0:00:00.333) 0:06:06.176 ********* 2025-05-14 02:37:03.042154 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042159 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042164 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042169 | orchestrator | 2025-05-14 02:37:03.042174 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:37:03.042179 | orchestrator | Wednesday 14 May 2025 02:29:49 +0000 (0:00:00.669) 0:06:06.846 ********* 2025-05-14 02:37:03.042200 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042206 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042211 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042216 | orchestrator | 2025-05-14 02:37:03.042221 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:37:03.042225 | orchestrator | Wednesday 14 May 2025 02:29:50 +0000 (0:00:00.350) 0:06:07.196 ********* 2025-05-14 02:37:03.042230 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042235 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042240 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042245 | orchestrator | 2025-05-14 02:37:03.042250 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 
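
The "look up for ceph-volume rejected devices" / "set_fact rejected_devices" pair above (skipped in this play) collects devices that ceph-volume refuses to use before any OSD counting happens. A hedged sketch of how such a list could be derived from ceph-volume's JSON inventory; the field names follow ceph-volume inventory output, but this is not the role's actual filter:

    import json
    import subprocess

    def rejected_devices():
        # "ceph-volume inventory" lists block devices with an "available" flag
        # and, for unusable ones, the reasons they were rejected.
        out = subprocess.run(
            ["ceph-volume", "inventory", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return {
            dev["path"]: dev.get("rejected_reasons", [])
            for dev in json.loads(out)
            if not dev.get("available", False)
        }

    if __name__ == "__main__":
        for path, reasons in rejected_devices().items():
            print(path, "->", ", ".join(reasons) or "unavailable")
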
2025-05-14 02:37:03.042254 | orchestrator | Wednesday 14 May 2025 02:29:50 +0000 (0:00:00.406) 0:06:07.603 ********* 2025-05-14 02:37:03.042259 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042264 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042269 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042274 | orchestrator | 2025-05-14 02:37:03.042278 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:37:03.042283 | orchestrator | Wednesday 14 May 2025 02:29:50 +0000 (0:00:00.321) 0:06:07.924 ********* 2025-05-14 02:37:03.042288 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042293 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042297 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042302 | orchestrator | 2025-05-14 02:37:03.042307 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:37:03.042312 | orchestrator | Wednesday 14 May 2025 02:29:51 +0000 (0:00:00.647) 0:06:08.571 ********* 2025-05-14 02:37:03.042320 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042325 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042330 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042335 | orchestrator | 2025-05-14 02:37:03.042343 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:37:03.042348 | orchestrator | Wednesday 14 May 2025 02:29:51 +0000 (0:00:00.366) 0:06:08.938 ********* 2025-05-14 02:37:03.042353 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042357 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042362 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042367 | orchestrator | 2025-05-14 02:37:03.042372 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:37:03.042377 | orchestrator | Wednesday 14 May 2025 02:29:52 +0000 (0:00:00.355) 0:06:09.293 ********* 2025-05-14 02:37:03.042382 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042386 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042391 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042396 | orchestrator | 2025-05-14 02:37:03.042401 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:37:03.042406 | orchestrator | Wednesday 14 May 2025 02:29:52 +0000 (0:00:00.366) 0:06:09.660 ********* 2025-05-14 02:37:03.042411 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042415 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042420 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042425 | orchestrator | 2025-05-14 02:37:03.042430 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:37:03.042435 | orchestrator | Wednesday 14 May 2025 02:29:53 +0000 (0:00:00.664) 0:06:10.325 ********* 2025-05-14 02:37:03.042439 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042444 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042449 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042454 | orchestrator | 2025-05-14 02:37:03.042458 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:37:03.042463 | orchestrator | Wednesday 14 May 2025 02:29:53 +0000 (0:00:00.384) 0:06:10.709 ********* 2025-05-14 02:37:03.042468 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:37:03.042473 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:37:03.042478 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042483 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.042487 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.042492 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042497 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.042502 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.042507 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042511 | orchestrator | 2025-05-14 02:37:03.042516 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:37:03.042521 | orchestrator | Wednesday 14 May 2025 02:29:54 +0000 (0:00:00.459) 0:06:11.168 ********* 2025-05-14 02:37:03.042526 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:37:03.042530 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:37:03.042535 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042540 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 02:37:03.042545 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:37:03.042550 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042554 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:37:03.042559 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:37:03.042564 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042569 | orchestrator | 2025-05-14 02:37:03.042577 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:37:03.042582 | orchestrator | Wednesday 14 May 2025 02:29:54 +0000 (0:00:00.563) 0:06:11.732 ********* 2025-05-14 02:37:03.042587 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042606 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042612 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042617 | orchestrator | 2025-05-14 02:37:03.042622 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:37:03.042642 | orchestrator | Wednesday 14 May 2025 02:29:55 +0000 (0:00:00.321) 0:06:12.053 ********* 2025-05-14 02:37:03.042648 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042653 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042658 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042663 | orchestrator | 2025-05-14 02:37:03.042668 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:37:03.042673 | orchestrator | Wednesday 14 May 2025 02:29:55 +0000 (0:00:00.311) 0:06:12.365 ********* 2025-05-14 02:37:03.042678 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042683 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042688 | orchestrator | skipping: [testbed-node-2] 2025-05-14 
02:37:03.042692 | orchestrator | 2025-05-14 02:37:03.042697 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:37:03.042702 | orchestrator | Wednesday 14 May 2025 02:29:55 +0000 (0:00:00.300) 0:06:12.666 ********* 2025-05-14 02:37:03.042707 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042712 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042716 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042721 | orchestrator | 2025-05-14 02:37:03.042726 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:37:03.042731 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:00.466) 0:06:13.132 ********* 2025-05-14 02:37:03.042736 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042741 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042746 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042751 | orchestrator | 2025-05-14 02:37:03.042755 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:37:03.042760 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:00.288) 0:06:13.420 ********* 2025-05-14 02:37:03.042765 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042770 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042778 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042783 | orchestrator | 2025-05-14 02:37:03.042787 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:37:03.042792 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:00.317) 0:06:13.738 ********* 2025-05-14 02:37:03.042797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.042802 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.042807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.042812 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042817 | orchestrator | 2025-05-14 02:37:03.042821 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:37:03.042826 | orchestrator | Wednesday 14 May 2025 02:29:57 +0000 (0:00:00.370) 0:06:14.109 ********* 2025-05-14 02:37:03.042831 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.042836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.042841 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.042846 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042851 | orchestrator | 2025-05-14 02:37:03.042856 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:37:03.042861 | orchestrator | Wednesday 14 May 2025 02:29:57 +0000 (0:00:00.411) 0:06:14.520 ********* 2025-05-14 02:37:03.042869 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.042874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.042879 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.042884 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042889 | orchestrator | 2025-05-14 02:37:03.042894 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.042899 | orchestrator | Wednesday 14 May 2025 02:29:57 +0000 (0:00:00.383) 0:06:14.903 ********* 2025-05-14 02:37:03.042903 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042908 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042913 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042918 | orchestrator | 2025-05-14 02:37:03.042923 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:37:03.042928 | orchestrator | Wednesday 14 May 2025 02:29:58 +0000 (0:00:00.563) 0:06:15.467 ********* 2025-05-14 02:37:03.042933 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.042937 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042942 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:37:03.042947 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042952 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.042957 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042962 | orchestrator | 2025-05-14 02:37:03.042966 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:37:03.042971 | orchestrator | Wednesday 14 May 2025 02:29:58 +0000 (0:00:00.497) 0:06:15.965 ********* 2025-05-14 02:37:03.042976 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.042981 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.042986 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.042991 | orchestrator | 2025-05-14 02:37:03.042996 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.043001 | orchestrator | Wednesday 14 May 2025 02:29:59 +0000 (0:00:00.347) 0:06:16.312 ********* 2025-05-14 02:37:03.043005 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043010 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043015 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043020 | orchestrator | 2025-05-14 02:37:03.043025 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:37:03.043030 | orchestrator | Wednesday 14 May 2025 02:29:59 +0000 (0:00:00.352) 0:06:16.665 ********* 2025-05-14 02:37:03.043034 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.043040 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043044 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:37:03.043049 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043069 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.043075 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043080 | orchestrator | 2025-05-14 02:37:03.043084 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:37:03.043089 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:00.782) 0:06:17.448 ********* 2025-05-14 02:37:03.043094 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043099 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043104 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043109 | orchestrator | 2025-05-14 02:37:03.043114 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 
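
The rgw_instances facts above describe one entry per RADOS gateway instance on a host (instance name, bind address, frontend port); they are skipped on these mon/mgr hosts because no rgw instances run there. A small sketch of building such a per-host list, where the key names, base port, and instance count are illustrative assumptions and only the address comes from this run:

    def rgw_instances(radosgw_address: str, base_port: int = 8080, count: int = 1):
        # One dict per rgw instance on the host, each bound to its own frontend port.
        return [
            {
                "instance_name": f"rgw{i}",
                "radosgw_address": radosgw_address,
                "radosgw_frontend_port": base_port + i,
            }
            for i in range(count)
        ]

    if __name__ == "__main__":
        for inst in rgw_instances("192.168.16.10", count=2):
            print(inst)
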
2025-05-14 02:37:03.043119 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:00.362) 0:06:17.810 ********* 2025-05-14 02:37:03.043123 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.043128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.043133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.043142 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043147 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:37:03.043151 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:37:03.043156 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:37:03.043161 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043166 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:37:03.043170 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:37:03.043175 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:37:03.043180 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043185 | orchestrator | 2025-05-14 02:37:03.043192 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:37:03.043197 | orchestrator | Wednesday 14 May 2025 02:30:01 +0000 (0:00:00.635) 0:06:18.445 ********* 2025-05-14 02:37:03.043202 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043207 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043212 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043216 | orchestrator | 2025-05-14 02:37:03.043221 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:37:03.043226 | orchestrator | Wednesday 14 May 2025 02:30:02 +0000 (0:00:00.900) 0:06:19.345 ********* 2025-05-14 02:37:03.043231 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043236 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043240 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043245 | orchestrator | 2025-05-14 02:37:03.043250 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:37:03.043255 | orchestrator | Wednesday 14 May 2025 02:30:02 +0000 (0:00:00.569) 0:06:19.915 ********* 2025-05-14 02:37:03.043260 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043265 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043269 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043274 | orchestrator | 2025-05-14 02:37:03.043279 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:37:03.043284 | orchestrator | Wednesday 14 May 2025 02:30:03 +0000 (0:00:00.854) 0:06:20.770 ********* 2025-05-14 02:37:03.043289 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043294 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043299 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043303 | orchestrator | 2025-05-14 02:37:03.043308 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-14 02:37:03.043313 | orchestrator | Wednesday 14 May 2025 02:30:04 +0000 (0:00:00.647) 0:06:21.418 ********* 2025-05-14 02:37:03.043318 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-05-14 02:37:03.043323 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:37:03.043328 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:37:03.043332 | orchestrator | 2025-05-14 02:37:03.043337 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-14 02:37:03.043342 | orchestrator | Wednesday 14 May 2025 02:30:05 +0000 (0:00:01.442) 0:06:22.861 ********* 2025-05-14 02:37:03.043347 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.043352 | orchestrator | 2025-05-14 02:37:03.043357 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-14 02:37:03.043362 | orchestrator | Wednesday 14 May 2025 02:30:06 +0000 (0:00:00.676) 0:06:23.537 ********* 2025-05-14 02:37:03.043367 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.043372 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.043376 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.043381 | orchestrator | 2025-05-14 02:37:03.043386 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-14 02:37:03.043396 | orchestrator | Wednesday 14 May 2025 02:30:07 +0000 (0:00:00.759) 0:06:24.297 ********* 2025-05-14 02:37:03.043401 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043406 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043410 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043415 | orchestrator | 2025-05-14 02:37:03.043420 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-14 02:37:03.043425 | orchestrator | Wednesday 14 May 2025 02:30:07 +0000 (0:00:00.586) 0:06:24.883 ********* 2025-05-14 02:37:03.043430 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:37:03.043434 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:37:03.043439 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:37:03.043444 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-14 02:37:03.043449 | orchestrator | 2025-05-14 02:37:03.043453 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-14 02:37:03.043458 | orchestrator | Wednesday 14 May 2025 02:30:16 +0000 (0:00:08.524) 0:06:33.407 ********* 2025-05-14 02:37:03.043477 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.043483 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.043488 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.043493 | orchestrator | 2025-05-14 02:37:03.043498 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-05-14 02:37:03.043503 | orchestrator | Wednesday 14 May 2025 02:30:16 +0000 (0:00:00.385) 0:06:33.793 ********* 2025-05-14 02:37:03.043508 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-14 02:37:03.043512 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 02:37:03.043517 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 02:37:03.043522 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-14 02:37:03.043527 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-05-14 02:37:03.043532 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:37:03.043537 | orchestrator | 2025-05-14 02:37:03.043542 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-14 02:37:03.043546 | orchestrator | Wednesday 14 May 2025 02:30:18 +0000 (0:00:02.007) 0:06:35.801 ********* 2025-05-14 02:37:03.043551 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-14 02:37:03.043556 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 02:37:03.043561 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 02:37:03.043566 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:37:03.043571 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-14 02:37:03.043576 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-14 02:37:03.043580 | orchestrator | 2025-05-14 02:37:03.043585 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-14 02:37:03.043590 | orchestrator | Wednesday 14 May 2025 02:30:20 +0000 (0:00:01.207) 0:06:37.009 ********* 2025-05-14 02:37:03.043616 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.043626 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.043633 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.043640 | orchestrator | 2025-05-14 02:37:03.043648 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-14 02:37:03.043657 | orchestrator | Wednesday 14 May 2025 02:30:20 +0000 (0:00:00.738) 0:06:37.747 ********* 2025-05-14 02:37:03.043664 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043669 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043673 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043678 | orchestrator | 2025-05-14 02:37:03.043683 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-14 02:37:03.043688 | orchestrator | Wednesday 14 May 2025 02:30:21 +0000 (0:00:00.620) 0:06:38.368 ********* 2025-05-14 02:37:03.043692 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043702 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043706 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043711 | orchestrator | 2025-05-14 02:37:03.043716 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-05-14 02:37:03.043721 | orchestrator | Wednesday 14 May 2025 02:30:21 +0000 (0:00:00.339) 0:06:38.708 ********* 2025-05-14 02:37:03.043725 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.043730 | orchestrator | 2025-05-14 02:37:03.043735 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-14 02:37:03.043740 | orchestrator | Wednesday 14 May 2025 02:30:22 +0000 (0:00:00.580) 0:06:39.288 ********* 2025-05-14 02:37:03.043745 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043750 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043755 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043760 | orchestrator | 2025-05-14 02:37:03.043765 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-14 02:37:03.043770 | orchestrator | 
Wednesday 14 May 2025 02:30:22 +0000 (0:00:00.607) 0:06:39.896 ********* 2025-05-14 02:37:03.043775 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043780 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043785 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.043790 | orchestrator | 2025-05-14 02:37:03.043795 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-14 02:37:03.043800 | orchestrator | Wednesday 14 May 2025 02:30:23 +0000 (0:00:00.332) 0:06:40.228 ********* 2025-05-14 02:37:03.043805 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.043810 | orchestrator | 2025-05-14 02:37:03.043815 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-14 02:37:03.043819 | orchestrator | Wednesday 14 May 2025 02:30:23 +0000 (0:00:00.528) 0:06:40.757 ********* 2025-05-14 02:37:03.043824 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.043829 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.043834 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.043839 | orchestrator | 2025-05-14 02:37:03.043844 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-14 02:37:03.043849 | orchestrator | Wednesday 14 May 2025 02:30:25 +0000 (0:00:01.448) 0:06:42.206 ********* 2025-05-14 02:37:03.043854 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.043859 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.043864 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.043868 | orchestrator | 2025-05-14 02:37:03.043873 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-14 02:37:03.043878 | orchestrator | Wednesday 14 May 2025 02:30:26 +0000 (0:00:01.197) 0:06:43.403 ********* 2025-05-14 02:37:03.043883 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.043888 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.043893 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.043898 | orchestrator | 2025-05-14 02:37:03.043903 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-14 02:37:03.043908 | orchestrator | Wednesday 14 May 2025 02:30:28 +0000 (0:00:01.683) 0:06:45.086 ********* 2025-05-14 02:37:03.043913 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.043918 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.043940 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.043946 | orchestrator | 2025-05-14 02:37:03.043951 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-14 02:37:03.043955 | orchestrator | Wednesday 14 May 2025 02:30:30 +0000 (0:00:02.200) 0:06:47.286 ********* 2025-05-14 02:37:03.043960 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.043965 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.043970 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-14 02:37:03.043980 | orchestrator | 2025-05-14 02:37:03.043985 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-14 02:37:03.043990 | orchestrator | Wednesday 14 May 2025 02:30:30 +0000 (0:00:00.677) 0:06:47.964 ********* 2025-05-14 
02:37:03.043994 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-14 02:37:03.043999 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-14 02:37:03.044004 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:37:03.044009 | orchestrator | 2025-05-14 02:37:03.044014 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-14 02:37:03.044019 | orchestrator | Wednesday 14 May 2025 02:30:44 +0000 (0:00:13.430) 0:07:01.394 ********* 2025-05-14 02:37:03.044024 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:37:03.044029 | orchestrator | 2025-05-14 02:37:03.044034 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-14 02:37:03.044039 | orchestrator | Wednesday 14 May 2025 02:30:46 +0000 (0:00:01.708) 0:07:03.103 ********* 2025-05-14 02:37:03.044043 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.044048 | orchestrator | 2025-05-14 02:37:03.044056 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-14 02:37:03.044061 | orchestrator | Wednesday 14 May 2025 02:30:46 +0000 (0:00:00.539) 0:07:03.642 ********* 2025-05-14 02:37:03.044066 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.044070 | orchestrator | 2025-05-14 02:37:03.044075 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-14 02:37:03.044080 | orchestrator | Wednesday 14 May 2025 02:30:46 +0000 (0:00:00.315) 0:07:03.958 ********* 2025-05-14 02:37:03.044085 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-14 02:37:03.044090 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-14 02:37:03.044094 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-14 02:37:03.044099 | orchestrator | 2025-05-14 02:37:03.044104 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-14 02:37:03.044109 | orchestrator | Wednesday 14 May 2025 02:30:53 +0000 (0:00:06.523) 0:07:10.482 ********* 2025-05-14 02:37:03.044113 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-14 02:37:03.044118 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-14 02:37:03.044123 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-14 02:37:03.044127 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-14 02:37:03.044132 | orchestrator | 2025-05-14 02:37:03.044137 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:37:03.044142 | orchestrator | Wednesday 14 May 2025 02:30:58 +0000 (0:00:04.813) 0:07:15.295 ********* 2025-05-14 02:37:03.044146 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.044151 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.044156 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.044161 | orchestrator | 2025-05-14 02:37:03.044165 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-14 02:37:03.044170 | orchestrator | Wednesday 14 
May 2025 02:30:59 +0000 (0:00:00.710) 0:07:16.005 ********* 2025-05-14 02:37:03.044175 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:03.044180 | orchestrator | 2025-05-14 02:37:03.044185 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-14 02:37:03.044189 | orchestrator | Wednesday 14 May 2025 02:30:59 +0000 (0:00:00.829) 0:07:16.835 ********* 2025-05-14 02:37:03.044194 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.044204 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.044208 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.044213 | orchestrator | 2025-05-14 02:37:03.044218 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-14 02:37:03.044223 | orchestrator | Wednesday 14 May 2025 02:31:00 +0000 (0:00:00.289) 0:07:17.124 ********* 2025-05-14 02:37:03.044228 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.044233 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.044237 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.044242 | orchestrator | 2025-05-14 02:37:03.044247 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-14 02:37:03.044252 | orchestrator | Wednesday 14 May 2025 02:31:01 +0000 (0:00:01.527) 0:07:18.652 ********* 2025-05-14 02:37:03.044257 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:37:03.044262 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:37:03.044266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:37:03.044271 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.044276 | orchestrator | 2025-05-14 02:37:03.044281 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-14 02:37:03.044286 | orchestrator | Wednesday 14 May 2025 02:31:02 +0000 (0:00:00.679) 0:07:19.331 ********* 2025-05-14 02:37:03.044291 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.044295 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.044300 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.044305 | orchestrator | 2025-05-14 02:37:03.044325 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:37:03.044330 | orchestrator | Wednesday 14 May 2025 02:31:02 +0000 (0:00:00.339) 0:07:19.670 ********* 2025-05-14 02:37:03.044335 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.044340 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.044345 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.044350 | orchestrator | 2025-05-14 02:37:03.044355 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-14 02:37:03.044360 | orchestrator | 2025-05-14 02:37:03.044365 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:37:03.044369 | orchestrator | Wednesday 14 May 2025 02:31:04 +0000 (0:00:02.088) 0:07:21.758 ********* 2025-05-14 02:37:03.044374 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.044379 | orchestrator | 2025-05-14 02:37:03.044384 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-05-14 02:37:03.044389 | orchestrator | Wednesday 14 May 2025 02:31:05 +0000 (0:00:00.844) 0:07:22.603 ********* 2025-05-14 02:37:03.044394 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044399 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044404 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044409 | orchestrator | 2025-05-14 02:37:03.044413 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:37:03.044418 | orchestrator | Wednesday 14 May 2025 02:31:05 +0000 (0:00:00.329) 0:07:22.932 ********* 2025-05-14 02:37:03.044423 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.044428 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.044433 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.044438 | orchestrator | 2025-05-14 02:37:03.044443 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:37:03.044452 | orchestrator | Wednesday 14 May 2025 02:31:06 +0000 (0:00:00.726) 0:07:23.659 ********* 2025-05-14 02:37:03.044457 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.044462 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.044467 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.044472 | orchestrator | 2025-05-14 02:37:03.044477 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:37:03.044481 | orchestrator | Wednesday 14 May 2025 02:31:07 +0000 (0:00:01.020) 0:07:24.679 ********* 2025-05-14 02:37:03.044491 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.044496 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.044500 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.044505 | orchestrator | 2025-05-14 02:37:03.044510 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:37:03.044515 | orchestrator | Wednesday 14 May 2025 02:31:08 +0000 (0:00:00.825) 0:07:25.505 ********* 2025-05-14 02:37:03.044520 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044524 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044529 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044534 | orchestrator | 2025-05-14 02:37:03.044539 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:37:03.044544 | orchestrator | Wednesday 14 May 2025 02:31:08 +0000 (0:00:00.321) 0:07:25.826 ********* 2025-05-14 02:37:03.044548 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044553 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044558 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044563 | orchestrator | 2025-05-14 02:37:03.044567 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:37:03.044572 | orchestrator | Wednesday 14 May 2025 02:31:09 +0000 (0:00:00.621) 0:07:26.448 ********* 2025-05-14 02:37:03.044577 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044582 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044586 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044591 | orchestrator | 2025-05-14 02:37:03.044607 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:37:03.044612 | orchestrator | Wednesday 
14 May 2025 02:31:09 +0000 (0:00:00.321) 0:07:26.769 ********* 2025-05-14 02:37:03.044617 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044622 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044627 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044632 | orchestrator | 2025-05-14 02:37:03.044637 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:37:03.044642 | orchestrator | Wednesday 14 May 2025 02:31:10 +0000 (0:00:00.319) 0:07:27.089 ********* 2025-05-14 02:37:03.044647 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044651 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044656 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044661 | orchestrator | 2025-05-14 02:37:03.044666 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:37:03.044671 | orchestrator | Wednesday 14 May 2025 02:31:10 +0000 (0:00:00.299) 0:07:27.388 ********* 2025-05-14 02:37:03.044676 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044681 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044686 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044690 | orchestrator | 2025-05-14 02:37:03.044695 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:37:03.044700 | orchestrator | Wednesday 14 May 2025 02:31:10 +0000 (0:00:00.602) 0:07:27.990 ********* 2025-05-14 02:37:03.044705 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.044710 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.044715 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.044720 | orchestrator | 2025-05-14 02:37:03.044724 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:37:03.044729 | orchestrator | Wednesday 14 May 2025 02:31:11 +0000 (0:00:00.740) 0:07:28.731 ********* 2025-05-14 02:37:03.044734 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044739 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044744 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044749 | orchestrator | 2025-05-14 02:37:03.044754 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:37:03.044759 | orchestrator | Wednesday 14 May 2025 02:31:12 +0000 (0:00:00.352) 0:07:29.084 ********* 2025-05-14 02:37:03.044768 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044788 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044794 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044799 | orchestrator | 2025-05-14 02:37:03.044804 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:37:03.044809 | orchestrator | Wednesday 14 May 2025 02:31:12 +0000 (0:00:00.297) 0:07:29.381 ********* 2025-05-14 02:37:03.044814 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.044819 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.044824 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.044828 | orchestrator | 2025-05-14 02:37:03.044833 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:37:03.044838 | orchestrator | Wednesday 14 May 2025 02:31:12 +0000 (0:00:00.583) 0:07:29.964 ********* 2025-05-14 02:37:03.044843 | 
orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.044848 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.044853 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.044858 | orchestrator | 2025-05-14 02:37:03.044863 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:37:03.044868 | orchestrator | Wednesday 14 May 2025 02:31:13 +0000 (0:00:00.392) 0:07:30.357 ********* 2025-05-14 02:37:03.044873 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.044878 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.044883 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.044887 | orchestrator | 2025-05-14 02:37:03.044892 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:37:03.044897 | orchestrator | Wednesday 14 May 2025 02:31:13 +0000 (0:00:00.463) 0:07:30.820 ********* 2025-05-14 02:37:03.044902 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044907 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044912 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044917 | orchestrator | 2025-05-14 02:37:03.044921 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:37:03.044926 | orchestrator | Wednesday 14 May 2025 02:31:14 +0000 (0:00:00.364) 0:07:31.185 ********* 2025-05-14 02:37:03.044934 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044939 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044943 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044948 | orchestrator | 2025-05-14 02:37:03.044953 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:37:03.044958 | orchestrator | Wednesday 14 May 2025 02:31:14 +0000 (0:00:00.577) 0:07:31.763 ********* 2025-05-14 02:37:03.044963 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.044967 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.044972 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.044977 | orchestrator | 2025-05-14 02:37:03.044981 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:37:03.044986 | orchestrator | Wednesday 14 May 2025 02:31:15 +0000 (0:00:00.321) 0:07:32.084 ********* 2025-05-14 02:37:03.044991 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.044996 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.045001 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.045005 | orchestrator | 2025-05-14 02:37:03.045010 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:37:03.045015 | orchestrator | Wednesday 14 May 2025 02:31:15 +0000 (0:00:00.422) 0:07:32.507 ********* 2025-05-14 02:37:03.045020 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045024 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045029 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045034 | orchestrator | 2025-05-14 02:37:03.045039 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:37:03.045043 | orchestrator | Wednesday 14 May 2025 02:31:15 +0000 (0:00:00.375) 0:07:32.882 ********* 2025-05-14 02:37:03.045048 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045053 | orchestrator | skipping: [testbed-node-4] 
2025-05-14 02:37:03.045062 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045067 | orchestrator | 2025-05-14 02:37:03.045072 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:37:03.045077 | orchestrator | Wednesday 14 May 2025 02:31:16 +0000 (0:00:00.646) 0:07:33.529 ********* 2025-05-14 02:37:03.045082 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045087 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045092 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045097 | orchestrator | 2025-05-14 02:37:03.045102 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:37:03.045107 | orchestrator | Wednesday 14 May 2025 02:31:16 +0000 (0:00:00.409) 0:07:33.939 ********* 2025-05-14 02:37:03.045112 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045116 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045121 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045126 | orchestrator | 2025-05-14 02:37:03.045131 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:37:03.045136 | orchestrator | Wednesday 14 May 2025 02:31:17 +0000 (0:00:00.334) 0:07:34.273 ********* 2025-05-14 02:37:03.045141 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045146 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045151 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045155 | orchestrator | 2025-05-14 02:37:03.045160 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:37:03.045165 | orchestrator | Wednesday 14 May 2025 02:31:17 +0000 (0:00:00.342) 0:07:34.615 ********* 2025-05-14 02:37:03.045170 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045175 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045180 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045185 | orchestrator | 2025-05-14 02:37:03.045190 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:37:03.045194 | orchestrator | Wednesday 14 May 2025 02:31:18 +0000 (0:00:00.595) 0:07:35.211 ********* 2025-05-14 02:37:03.045199 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045204 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045209 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045214 | orchestrator | 2025-05-14 02:37:03.045219 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:37:03.045224 | orchestrator | Wednesday 14 May 2025 02:31:18 +0000 (0:00:00.365) 0:07:35.576 ********* 2025-05-14 02:37:03.045229 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045249 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045254 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045259 | orchestrator | 2025-05-14 02:37:03.045264 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:37:03.045269 | orchestrator | Wednesday 14 May 2025 02:31:18 +0000 (0:00:00.364) 0:07:35.941 ********* 2025-05-14 02:37:03.045274 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045279 | orchestrator | skipping: [testbed-node-4] 2025-05-14 
02:37:03.045284 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045289 | orchestrator | 2025-05-14 02:37:03.045293 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:37:03.045298 | orchestrator | Wednesday 14 May 2025 02:31:19 +0000 (0:00:00.326) 0:07:36.267 ********* 2025-05-14 02:37:03.045303 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045308 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045313 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045318 | orchestrator | 2025-05-14 02:37:03.045323 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:37:03.045328 | orchestrator | Wednesday 14 May 2025 02:31:19 +0000 (0:00:00.641) 0:07:36.909 ********* 2025-05-14 02:37:03.045333 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045342 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045347 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045351 | orchestrator | 2025-05-14 02:37:03.045356 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:37:03.045361 | orchestrator | Wednesday 14 May 2025 02:31:20 +0000 (0:00:00.339) 0:07:37.249 ********* 2025-05-14 02:37:03.045366 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045371 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045375 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045380 | orchestrator | 2025-05-14 02:37:03.045388 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:37:03.045393 | orchestrator | Wednesday 14 May 2025 02:31:20 +0000 (0:00:00.325) 0:07:37.574 ********* 2025-05-14 02:37:03.045397 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.045402 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.045407 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.045412 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.045416 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045421 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045426 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.045430 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.045435 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045440 | orchestrator | 2025-05-14 02:37:03.045445 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:37:03.045449 | orchestrator | Wednesday 14 May 2025 02:31:20 +0000 (0:00:00.389) 0:07:37.963 ********* 2025-05-14 02:37:03.045454 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:37:03.045459 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:37:03.045464 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045468 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:37:03.045473 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:37:03.045478 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045483 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 
02:37:03.045488 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:37:03.045493 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045498 | orchestrator | 2025-05-14 02:37:03.045503 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:37:03.045508 | orchestrator | Wednesday 14 May 2025 02:31:21 +0000 (0:00:00.644) 0:07:38.608 ********* 2025-05-14 02:37:03.045512 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045517 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045522 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045527 | orchestrator | 2025-05-14 02:37:03.045532 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:37:03.045537 | orchestrator | Wednesday 14 May 2025 02:31:21 +0000 (0:00:00.380) 0:07:38.989 ********* 2025-05-14 02:37:03.045542 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045547 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045552 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045557 | orchestrator | 2025-05-14 02:37:03.045561 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:37:03.045566 | orchestrator | Wednesday 14 May 2025 02:31:22 +0000 (0:00:00.402) 0:07:39.391 ********* 2025-05-14 02:37:03.045571 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045576 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045581 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045586 | orchestrator | 2025-05-14 02:37:03.045591 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:37:03.045630 | orchestrator | Wednesday 14 May 2025 02:31:22 +0000 (0:00:00.346) 0:07:39.737 ********* 2025-05-14 02:37:03.045635 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045640 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045645 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045649 | orchestrator | 2025-05-14 02:37:03.045654 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:37:03.045659 | orchestrator | Wednesday 14 May 2025 02:31:23 +0000 (0:00:00.625) 0:07:40.363 ********* 2025-05-14 02:37:03.045664 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045669 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045673 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045678 | orchestrator | 2025-05-14 02:37:03.045683 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:37:03.045705 | orchestrator | Wednesday 14 May 2025 02:31:23 +0000 (0:00:00.334) 0:07:40.698 ********* 2025-05-14 02:37:03.045710 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045715 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045719 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045724 | orchestrator | 2025-05-14 02:37:03.045728 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:37:03.045733 | orchestrator | Wednesday 14 May 2025 02:31:24 +0000 (0:00:00.343) 0:07:41.042 ********* 2025-05-14 02:37:03.045737 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.045742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.045747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.045751 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045756 | orchestrator | 2025-05-14 02:37:03.045760 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:37:03.045765 | orchestrator | Wednesday 14 May 2025 02:31:24 +0000 (0:00:00.425) 0:07:41.467 ********* 2025-05-14 02:37:03.045769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.045774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.045778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.045783 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045787 | orchestrator | 2025-05-14 02:37:03.045792 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:37:03.045796 | orchestrator | Wednesday 14 May 2025 02:31:24 +0000 (0:00:00.436) 0:07:41.904 ********* 2025-05-14 02:37:03.045801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.045806 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.045810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.045818 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045822 | orchestrator | 2025-05-14 02:37:03.045827 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.045832 | orchestrator | Wednesday 14 May 2025 02:31:25 +0000 (0:00:00.774) 0:07:42.679 ********* 2025-05-14 02:37:03.045836 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045841 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045845 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045849 | orchestrator | 2025-05-14 02:37:03.045854 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:37:03.045859 | orchestrator | Wednesday 14 May 2025 02:31:26 +0000 (0:00:00.653) 0:07:43.332 ********* 2025-05-14 02:37:03.045863 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.045868 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045872 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.045877 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045881 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.045892 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045896 | orchestrator | 2025-05-14 02:37:03.045901 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:37:03.045906 | orchestrator | Wednesday 14 May 2025 02:31:26 +0000 (0:00:00.453) 0:07:43.785 ********* 2025-05-14 02:37:03.045910 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045915 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045919 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045924 | orchestrator | 2025-05-14 02:37:03.045928 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.045933 | 
orchestrator | Wednesday 14 May 2025 02:31:27 +0000 (0:00:00.353) 0:07:44.139 ********* 2025-05-14 02:37:03.045937 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045942 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045946 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045951 | orchestrator | 2025-05-14 02:37:03.045956 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:37:03.045960 | orchestrator | Wednesday 14 May 2025 02:31:27 +0000 (0:00:00.339) 0:07:44.478 ********* 2025-05-14 02:37:03.045965 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.045969 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.045974 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.045978 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.045983 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.045987 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.045991 | orchestrator | 2025-05-14 02:37:03.045996 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:37:03.046001 | orchestrator | Wednesday 14 May 2025 02:31:28 +0000 (0:00:00.893) 0:07:45.373 ********* 2025-05-14 02:37:03.046005 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.046010 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046030 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.046035 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046040 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.046045 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046049 | orchestrator | 2025-05-14 02:37:03.046054 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:37:03.046058 | orchestrator | Wednesday 14 May 2025 02:31:28 +0000 (0:00:00.372) 0:07:45.745 ********* 2025-05-14 02:37:03.046063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.046068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.046072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.046093 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:37:03.046098 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:37:03.046103 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046107 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:37:03.046112 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046116 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:37:03.046121 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:37:03.046125 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:37:03.046130 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046134 | orchestrator | 2025-05-14 02:37:03.046139 | orchestrator | TASK [ceph-config 
: generate ceph.conf configuration file] ********************* 2025-05-14 02:37:03.046143 | orchestrator | Wednesday 14 May 2025 02:31:29 +0000 (0:00:00.707) 0:07:46.453 ********* 2025-05-14 02:37:03.046152 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046157 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046162 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046166 | orchestrator | 2025-05-14 02:37:03.046171 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:37:03.046176 | orchestrator | Wednesday 14 May 2025 02:31:30 +0000 (0:00:00.891) 0:07:47.344 ********* 2025-05-14 02:37:03.046180 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.046185 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046189 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:37:03.046194 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046198 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:37:03.046203 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046207 | orchestrator | 2025-05-14 02:37:03.046212 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:37:03.046220 | orchestrator | Wednesday 14 May 2025 02:31:30 +0000 (0:00:00.541) 0:07:47.886 ********* 2025-05-14 02:37:03.046224 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046229 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046233 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046238 | orchestrator | 2025-05-14 02:37:03.046242 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:37:03.046247 | orchestrator | Wednesday 14 May 2025 02:31:31 +0000 (0:00:00.841) 0:07:48.727 ********* 2025-05-14 02:37:03.046252 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046256 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046261 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046265 | orchestrator | 2025-05-14 02:37:03.046270 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-14 02:37:03.046274 | orchestrator | Wednesday 14 May 2025 02:31:32 +0000 (0:00:00.582) 0:07:49.310 ********* 2025-05-14 02:37:03.046279 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.046283 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.046288 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.046292 | orchestrator | 2025-05-14 02:37:03.046297 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-14 02:37:03.046301 | orchestrator | Wednesday 14 May 2025 02:31:32 +0000 (0:00:00.647) 0:07:49.957 ********* 2025-05-14 02:37:03.046306 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:37:03.046310 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:37:03.046315 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:37:03.046319 | orchestrator | 2025-05-14 02:37:03.046324 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-14 02:37:03.046328 | orchestrator | Wednesday 14 May 2025 02:31:33 +0000 (0:00:00.680) 
0:07:50.637 ********* 2025-05-14 02:37:03.046333 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.046337 | orchestrator | 2025-05-14 02:37:03.046342 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-14 02:37:03.046347 | orchestrator | Wednesday 14 May 2025 02:31:34 +0000 (0:00:00.553) 0:07:51.190 ********* 2025-05-14 02:37:03.046351 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046356 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046360 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046365 | orchestrator | 2025-05-14 02:37:03.046369 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-14 02:37:03.046374 | orchestrator | Wednesday 14 May 2025 02:31:34 +0000 (0:00:00.559) 0:07:51.750 ********* 2025-05-14 02:37:03.046378 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046383 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046393 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046401 | orchestrator | 2025-05-14 02:37:03.046408 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-14 02:37:03.046416 | orchestrator | Wednesday 14 May 2025 02:31:35 +0000 (0:00:00.304) 0:07:52.054 ********* 2025-05-14 02:37:03.046428 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046437 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046444 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046452 | orchestrator | 2025-05-14 02:37:03.046459 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-14 02:37:03.046466 | orchestrator | Wednesday 14 May 2025 02:31:35 +0000 (0:00:00.307) 0:07:52.361 ********* 2025-05-14 02:37:03.046473 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046479 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046487 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046494 | orchestrator | 2025-05-14 02:37:03.046502 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-14 02:37:03.046509 | orchestrator | Wednesday 14 May 2025 02:31:35 +0000 (0:00:00.310) 0:07:52.672 ********* 2025-05-14 02:37:03.046516 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.046521 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.046526 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.046530 | orchestrator | 2025-05-14 02:37:03.046555 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-14 02:37:03.046561 | orchestrator | Wednesday 14 May 2025 02:31:36 +0000 (0:00:00.829) 0:07:53.502 ********* 2025-05-14 02:37:03.046565 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.046570 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.046575 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.046579 | orchestrator | 2025-05-14 02:37:03.046584 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-14 02:37:03.046589 | orchestrator | Wednesday 14 May 2025 02:31:36 +0000 (0:00:00.302) 0:07:53.804 ********* 2025-05-14 02:37:03.046610 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': 
True}) 2025-05-14 02:37:03.046615 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-14 02:37:03.046620 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-14 02:37:03.046625 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-14 02:37:03.046629 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-14 02:37:03.046634 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-14 02:37:03.046639 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-14 02:37:03.046643 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-14 02:37:03.046648 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-14 02:37:03.046653 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-14 02:37:03.046657 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-14 02:37:03.046662 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-14 02:37:03.046667 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-14 02:37:03.046672 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-14 02:37:03.046676 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-14 02:37:03.046681 | orchestrator | 2025-05-14 02:37:03.046685 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-05-14 02:37:03.046696 | orchestrator | Wednesday 14 May 2025 02:31:39 +0000 (0:00:03.042) 0:07:56.846 ********* 2025-05-14 02:37:03.046701 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046705 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046710 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046715 | orchestrator | 2025-05-14 02:37:03.046719 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-05-14 02:37:03.046724 | orchestrator | Wednesday 14 May 2025 02:31:40 +0000 (0:00:00.277) 0:07:57.124 ********* 2025-05-14 02:37:03.046728 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.046733 | orchestrator | 2025-05-14 02:37:03.046737 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-05-14 02:37:03.046742 | orchestrator | Wednesday 14 May 2025 02:31:40 +0000 (0:00:00.709) 0:07:57.834 ********* 2025-05-14 02:37:03.046746 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-14 02:37:03.046751 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-14 02:37:03.046755 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-14 02:37:03.046760 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-14 02:37:03.046764 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-14 02:37:03.046769 | orchestrator | ok: 
[testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-14 02:37:03.046773 | orchestrator | 2025-05-14 02:37:03.046778 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-05-14 02:37:03.046782 | orchestrator | Wednesday 14 May 2025 02:31:41 +0000 (0:00:00.935) 0:07:58.770 ********* 2025-05-14 02:37:03.046787 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:37:03.046791 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.046796 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 02:37:03.046801 | orchestrator | 2025-05-14 02:37:03.046805 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-05-14 02:37:03.046809 | orchestrator | Wednesday 14 May 2025 02:31:43 +0000 (0:00:01.899) 0:08:00.669 ********* 2025-05-14 02:37:03.046814 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 02:37:03.046818 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.046823 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.046827 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:37:03.046832 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:37:03.046836 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.046841 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 02:37:03.046845 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:37:03.046850 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.046855 | orchestrator | 2025-05-14 02:37:03.046859 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-05-14 02:37:03.046864 | orchestrator | Wednesday 14 May 2025 02:31:45 +0000 (0:00:01.621) 0:08:02.291 ********* 2025-05-14 02:37:03.046887 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:37:03.046893 | orchestrator | 2025-05-14 02:37:03.046897 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-05-14 02:37:03.046902 | orchestrator | Wednesday 14 May 2025 02:31:47 +0000 (0:00:02.086) 0:08:04.378 ********* 2025-05-14 02:37:03.046907 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.046911 | orchestrator | 2025-05-14 02:37:03.046916 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-05-14 02:37:03.046920 | orchestrator | Wednesday 14 May 2025 02:31:48 +0000 (0:00:00.787) 0:08:05.165 ********* 2025-05-14 02:37:03.046925 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046934 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046938 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.046943 | orchestrator | 2025-05-14 02:37:03.046947 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-14 02:37:03.046952 | orchestrator | Wednesday 14 May 2025 02:31:48 +0000 (0:00:00.316) 0:08:05.481 ********* 2025-05-14 02:37:03.046957 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.046961 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.046988 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:37:03.046993 | orchestrator | 2025-05-14 02:37:03.046998 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-14 02:37:03.047003 | orchestrator | Wednesday 14 May 2025 02:31:48 +0000 (0:00:00.327) 0:08:05.809 ********* 2025-05-14 02:37:03.047008 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047012 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047017 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.047021 | orchestrator | 2025-05-14 02:37:03.047029 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-14 02:37:03.047033 | orchestrator | Wednesday 14 May 2025 02:31:49 +0000 (0:00:00.307) 0:08:06.116 ********* 2025-05-14 02:37:03.047038 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.047043 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.047047 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.047052 | orchestrator | 2025-05-14 02:37:03.047056 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-14 02:37:03.047061 | orchestrator | Wednesday 14 May 2025 02:31:49 +0000 (0:00:00.619) 0:08:06.736 ********* 2025-05-14 02:37:03.047065 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.047070 | orchestrator | 2025-05-14 02:37:03.047074 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-05-14 02:37:03.047079 | orchestrator | Wednesday 14 May 2025 02:31:50 +0000 (0:00:00.648) 0:08:07.385 ********* 2025-05-14 02:37:03.047084 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ea3c2360-3d2e-5360-8839-85b817b77bc3', 'data_vg': 'ceph-ea3c2360-3d2e-5360-8839-85b817b77bc3'}) 2025-05-14 02:37:03.047089 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-caf94b5f-07a0-5316-9d7c-8f668ab64c5d', 'data_vg': 'ceph-caf94b5f-07a0-5316-9d7c-8f668ab64c5d'}) 2025-05-14 02:37:03.047094 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-03d77871-dede-5752-b4dd-afb6f86d8bca', 'data_vg': 'ceph-03d77871-dede-5752-b4dd-afb6f86d8bca'}) 2025-05-14 02:37:03.047098 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fecac30f-087c-5b0b-83ef-f9d2b642a995', 'data_vg': 'ceph-fecac30f-087c-5b0b-83ef-f9d2b642a995'}) 2025-05-14 02:37:03.047103 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a0a91196-50f5-599a-8231-3d981ca1eca9', 'data_vg': 'ceph-a0a91196-50f5-599a-8231-3d981ca1eca9'}) 2025-05-14 02:37:03.047108 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0c7e27ae-f126-51b5-99e7-7e9908cad598', 'data_vg': 'ceph-0c7e27ae-f126-51b5-99e7-7e9908cad598'}) 2025-05-14 02:37:03.047112 | orchestrator | 2025-05-14 02:37:03.047117 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-14 02:37:03.047122 | orchestrator | Wednesday 14 May 2025 02:32:32 +0000 (0:00:41.649) 0:08:49.035 ********* 2025-05-14 02:37:03.047126 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047131 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047135 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.047140 | orchestrator | 2025-05-14 02:37:03.047145 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
********************************* 2025-05-14 02:37:03.047149 | orchestrator | Wednesday 14 May 2025 02:32:32 +0000 (0:00:00.547) 0:08:49.583 ********* 2025-05-14 02:37:03.047154 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.047162 | orchestrator | 2025-05-14 02:37:03.047167 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-14 02:37:03.047171 | orchestrator | Wednesday 14 May 2025 02:32:33 +0000 (0:00:00.608) 0:08:50.191 ********* 2025-05-14 02:37:03.047176 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.047180 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.047185 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.047190 | orchestrator | 2025-05-14 02:37:03.047194 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-14 02:37:03.047199 | orchestrator | Wednesday 14 May 2025 02:32:33 +0000 (0:00:00.651) 0:08:50.842 ********* 2025-05-14 02:37:03.047203 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.047208 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.047212 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.047217 | orchestrator | 2025-05-14 02:37:03.047239 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-14 02:37:03.047244 | orchestrator | Wednesday 14 May 2025 02:32:35 +0000 (0:00:01.620) 0:08:52.462 ********* 2025-05-14 02:37:03.047249 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.047253 | orchestrator | 2025-05-14 02:37:03.047258 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-14 02:37:03.047262 | orchestrator | Wednesday 14 May 2025 02:32:36 +0000 (0:00:00.565) 0:08:53.027 ********* 2025-05-14 02:37:03.047267 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.047272 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.047276 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.047281 | orchestrator | 2025-05-14 02:37:03.047285 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-14 02:37:03.047290 | orchestrator | Wednesday 14 May 2025 02:32:37 +0000 (0:00:01.408) 0:08:54.435 ********* 2025-05-14 02:37:03.047294 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.047299 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.047304 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.047309 | orchestrator | 2025-05-14 02:37:03.047313 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-14 02:37:03.047318 | orchestrator | Wednesday 14 May 2025 02:32:38 +0000 (0:00:01.497) 0:08:55.933 ********* 2025-05-14 02:37:03.047323 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.047327 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.047332 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.047337 | orchestrator | 2025-05-14 02:37:03.047341 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-14 02:37:03.047346 | orchestrator | Wednesday 14 May 2025 02:32:40 +0000 (0:00:01.755) 0:08:57.688 ********* 2025-05-14 02:37:03.047351 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 02:37:03.047355 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047363 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.047368 | orchestrator | 2025-05-14 02:37:03.047373 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-14 02:37:03.047377 | orchestrator | Wednesday 14 May 2025 02:32:41 +0000 (0:00:00.314) 0:08:58.003 ********* 2025-05-14 02:37:03.047382 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047387 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047391 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.047396 | orchestrator | 2025-05-14 02:37:03.047401 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-14 02:37:03.047405 | orchestrator | Wednesday 14 May 2025 02:32:41 +0000 (0:00:00.601) 0:08:58.604 ********* 2025-05-14 02:37:03.047410 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 02:37:03.047415 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-05-14 02:37:03.047420 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-05-14 02:37:03.047428 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-05-14 02:37:03.047433 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-05-14 02:37:03.047438 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-05-14 02:37:03.047442 | orchestrator | 2025-05-14 02:37:03.047447 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-14 02:37:03.047452 | orchestrator | Wednesday 14 May 2025 02:32:42 +0000 (0:00:01.009) 0:08:59.613 ********* 2025-05-14 02:37:03.047457 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-14 02:37:03.047461 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-05-14 02:37:03.047466 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-05-14 02:37:03.047470 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-05-14 02:37:03.047475 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-05-14 02:37:03.047480 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-05-14 02:37:03.047484 | orchestrator | 2025-05-14 02:37:03.047489 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-14 02:37:03.047494 | orchestrator | Wednesday 14 May 2025 02:32:46 +0000 (0:00:03.638) 0:09:03.252 ********* 2025-05-14 02:37:03.047498 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047503 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047507 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:37:03.047512 | orchestrator | 2025-05-14 02:37:03.047517 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-14 02:37:03.047522 | orchestrator | Wednesday 14 May 2025 02:32:49 +0000 (0:00:03.207) 0:09:06.460 ********* 2025-05-14 02:37:03.047526 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047531 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047536 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
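
For reference, the "wait for all osd to be up" retry visible just above (FAILED - RETRYING ... 60 retries left) is the usual Ansible retry/until pattern around "ceph osd stat". The sketch below illustrates that pattern only; it is not the exact task from /ansible/roles/ceph-osd/tasks/start_osds.yml, the "mons" group name is an assumption about the inventory, and a containerized deployment like this one would wrap the ceph call in its container runtime.

# Minimal sketch (assumptions noted above): poll a monitor until every OSD
# reported by "ceph osd stat" is also up, retrying the way the log shows.
- name: wait for all osd to be up
  ansible.builtin.command: ceph osd stat -f json   # assumes ceph CLI on the delegate host
  register: osd_stat
  delegate_to: "{{ groups['mons'][0] }}"           # assumed group name
  run_once: true
  retries: 60
  delay: 10
  until: >-
    (osd_stat.stdout | from_json).num_osds | int > 0
    and (osd_stat.stdout | from_json).num_up_osds | int == (osd_stat.stdout | from_json).num_osds | int
  changed_when: false
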
2025-05-14 02:37:03.047540 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:37:03.047545 | orchestrator | 2025-05-14 02:37:03.047550 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-14 02:37:03.047554 | orchestrator | Wednesday 14 May 2025 02:33:01 +0000 (0:00:12.529) 0:09:18.989 ********* 2025-05-14 02:37:03.047559 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047564 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047568 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.047573 | orchestrator | 2025-05-14 02:37:03.047578 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-14 02:37:03.047582 | orchestrator | Wednesday 14 May 2025 02:33:02 +0000 (0:00:00.531) 0:09:19.521 ********* 2025-05-14 02:37:03.047587 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047621 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047627 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.047631 | orchestrator | 2025-05-14 02:37:03.047636 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:37:03.047640 | orchestrator | Wednesday 14 May 2025 02:33:03 +0000 (0:00:01.166) 0:09:20.687 ********* 2025-05-14 02:37:03.047645 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.047649 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.047654 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.047659 | orchestrator | 2025-05-14 02:37:03.047664 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-14 02:37:03.047688 | orchestrator | Wednesday 14 May 2025 02:33:04 +0000 (0:00:00.664) 0:09:21.352 ********* 2025-05-14 02:37:03.047693 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.047698 | orchestrator | 2025-05-14 02:37:03.047702 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-14 02:37:03.047707 | orchestrator | Wednesday 14 May 2025 02:33:05 +0000 (0:00:00.836) 0:09:22.189 ********* 2025-05-14 02:37:03.047712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.047721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.047725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.047730 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047735 | orchestrator | 2025-05-14 02:37:03.047739 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-14 02:37:03.047744 | orchestrator | Wednesday 14 May 2025 02:33:05 +0000 (0:00:00.432) 0:09:22.621 ********* 2025-05-14 02:37:03.047748 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047753 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047758 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.047762 | orchestrator | 2025-05-14 02:37:03.047767 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-05-14 02:37:03.047772 | orchestrator | Wednesday 14 May 2025 02:33:05 +0000 (0:00:00.303) 0:09:22.925 ********* 2025-05-14 02:37:03.047776 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
02:37:03.047781 | orchestrator | 2025-05-14 02:37:03.047786 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-14 02:37:03.047790 | orchestrator | Wednesday 14 May 2025 02:33:06 +0000 (0:00:00.234) 0:09:23.159 ********* 2025-05-14 02:37:03.047795 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047803 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047807 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.047811 | orchestrator | 2025-05-14 02:37:03.047815 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-14 02:37:03.047819 | orchestrator | Wednesday 14 May 2025 02:33:06 +0000 (0:00:00.607) 0:09:23.767 ********* 2025-05-14 02:37:03.047824 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047828 | orchestrator | 2025-05-14 02:37:03.047832 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-14 02:37:03.047836 | orchestrator | Wednesday 14 May 2025 02:33:07 +0000 (0:00:00.259) 0:09:24.026 ********* 2025-05-14 02:37:03.047840 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047844 | orchestrator | 2025-05-14 02:37:03.047848 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-14 02:37:03.047853 | orchestrator | Wednesday 14 May 2025 02:33:07 +0000 (0:00:00.260) 0:09:24.287 ********* 2025-05-14 02:37:03.047857 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047861 | orchestrator | 2025-05-14 02:37:03.047865 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-14 02:37:03.047869 | orchestrator | Wednesday 14 May 2025 02:33:07 +0000 (0:00:00.129) 0:09:24.417 ********* 2025-05-14 02:37:03.047874 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047878 | orchestrator | 2025-05-14 02:37:03.047882 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-14 02:37:03.047886 | orchestrator | Wednesday 14 May 2025 02:33:07 +0000 (0:00:00.238) 0:09:24.655 ********* 2025-05-14 02:37:03.047890 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047894 | orchestrator | 2025-05-14 02:37:03.047898 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-14 02:37:03.047902 | orchestrator | Wednesday 14 May 2025 02:33:07 +0000 (0:00:00.227) 0:09:24.882 ********* 2025-05-14 02:37:03.047907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.047911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.047915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.047919 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047923 | orchestrator | 2025-05-14 02:37:03.047928 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-14 02:37:03.047932 | orchestrator | Wednesday 14 May 2025 02:33:08 +0000 (0:00:00.409) 0:09:25.291 ********* 2025-05-14 02:37:03.047936 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047940 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.047949 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.047953 | orchestrator | 2025-05-14 02:37:03.047957 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg 
autoscale on pools] *************** 2025-05-14 02:37:03.047961 | orchestrator | Wednesday 14 May 2025 02:33:08 +0000 (0:00:00.379) 0:09:25.670 ********* 2025-05-14 02:37:03.047966 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047970 | orchestrator | 2025-05-14 02:37:03.047974 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-14 02:37:03.047978 | orchestrator | Wednesday 14 May 2025 02:33:09 +0000 (0:00:00.874) 0:09:26.545 ********* 2025-05-14 02:37:03.047982 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.047986 | orchestrator | 2025-05-14 02:37:03.047990 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:37:03.047995 | orchestrator | Wednesday 14 May 2025 02:33:09 +0000 (0:00:00.257) 0:09:26.803 ********* 2025-05-14 02:37:03.047999 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.048003 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.048007 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.048012 | orchestrator | 2025-05-14 02:37:03.048016 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-14 02:37:03.048020 | orchestrator | 2025-05-14 02:37:03.048024 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:37:03.048028 | orchestrator | Wednesday 14 May 2025 02:33:12 +0000 (0:00:02.943) 0:09:29.747 ********* 2025-05-14 02:37:03.048050 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.048056 | orchestrator | 2025-05-14 02:37:03.048061 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:37:03.048065 | orchestrator | Wednesday 14 May 2025 02:33:14 +0000 (0:00:01.268) 0:09:31.015 ********* 2025-05-14 02:37:03.048069 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048073 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.048077 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048082 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.048086 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048090 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.048094 | orchestrator | 2025-05-14 02:37:03.048098 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:37:03.048103 | orchestrator | Wednesday 14 May 2025 02:33:14 +0000 (0:00:00.774) 0:09:31.790 ********* 2025-05-14 02:37:03.048107 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048111 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048115 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048119 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.048124 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.048128 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.048132 | orchestrator | 2025-05-14 02:37:03.048136 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:37:03.048140 | orchestrator | Wednesday 14 May 2025 02:33:16 +0000 (0:00:01.232) 0:09:33.022 ********* 2025-05-14 02:37:03.048145 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048149 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 02:37:03.048153 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048157 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.048161 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.048166 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.048170 | orchestrator | 2025-05-14 02:37:03.048177 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:37:03.048185 | orchestrator | Wednesday 14 May 2025 02:33:16 +0000 (0:00:00.935) 0:09:33.958 ********* 2025-05-14 02:37:03.048192 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048199 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048206 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048220 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.048226 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.048232 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.048239 | orchestrator | 2025-05-14 02:37:03.048246 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:37:03.048251 | orchestrator | Wednesday 14 May 2025 02:33:18 +0000 (0:00:01.200) 0:09:35.158 ********* 2025-05-14 02:37:03.048255 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048259 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048263 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.048268 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048272 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.048276 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.048280 | orchestrator | 2025-05-14 02:37:03.048284 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:37:03.048288 | orchestrator | Wednesday 14 May 2025 02:33:19 +0000 (0:00:01.023) 0:09:36.182 ********* 2025-05-14 02:37:03.048292 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048296 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048300 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048304 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048308 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048313 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048317 | orchestrator | 2025-05-14 02:37:03.048321 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:37:03.048325 | orchestrator | Wednesday 14 May 2025 02:33:19 +0000 (0:00:00.648) 0:09:36.831 ********* 2025-05-14 02:37:03.048329 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048333 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048337 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048342 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048346 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048350 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048354 | orchestrator | 2025-05-14 02:37:03.048358 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:37:03.048362 | orchestrator | Wednesday 14 May 2025 02:33:20 +0000 (0:00:00.934) 0:09:37.766 ********* 2025-05-14 02:37:03.048366 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048370 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048375 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048379 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048383 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048387 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048391 | orchestrator | 2025-05-14 02:37:03.048395 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:37:03.048399 | orchestrator | Wednesday 14 May 2025 02:33:21 +0000 (0:00:00.673) 0:09:38.439 ********* 2025-05-14 02:37:03.048403 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048407 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048411 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048415 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048420 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048424 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048428 | orchestrator | 2025-05-14 02:37:03.048432 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:37:03.048436 | orchestrator | Wednesday 14 May 2025 02:33:22 +0000 (0:00:00.961) 0:09:39.400 ********* 2025-05-14 02:37:03.048440 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048445 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048449 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048453 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048457 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048461 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048465 | orchestrator | 2025-05-14 02:37:03.048472 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:37:03.048477 | orchestrator | Wednesday 14 May 2025 02:33:23 +0000 (0:00:00.631) 0:09:40.032 ********* 2025-05-14 02:37:03.048481 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.048485 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.048505 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.048510 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.048514 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.048518 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.048522 | orchestrator | 2025-05-14 02:37:03.048526 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:37:03.048530 | orchestrator | Wednesday 14 May 2025 02:33:24 +0000 (0:00:01.631) 0:09:41.664 ********* 2025-05-14 02:37:03.048535 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048539 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048543 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048547 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048551 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048555 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048559 | orchestrator | 2025-05-14 02:37:03.048564 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:37:03.048568 | orchestrator | Wednesday 14 May 2025 02:33:25 +0000 (0:00:00.816) 0:09:42.480 ********* 2025-05-14 02:37:03.048572 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.048576 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.048580 | orchestrator | ok: [testbed-node-2] 
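The check_running_containers.yml tasks above probe each node for running ceph daemon containers, and the set_fact handler_*_status tasks that follow turn those probes into booleans that the restart handlers consult later in the play (mon/mgr facts resolve on testbed-node-0..2, osd/mds/rgw facts on testbed-node-3..5). A minimal ceph-ansible-style sketch of one such probe/fact pair; the container_binary default, the name filter, and the register/fact names here are assumptions for illustration, not read from this log:

# Sketch only (assumed names): probe for a running ceph-crash container,
# then record the result as a boolean fact for later handler decisions.
- name: check for a ceph-crash container
  ansible.builtin.command: "{{ container_binary | default('podman') }} ps -q --filter name=ceph-crash-{{ ansible_facts['hostname'] }}"
  register: ceph_crash_container_stat
  changed_when: false
  failed_when: false

- name: set_fact handler_crash_status
  ansible.builtin.set_fact:
    handler_crash_status: "{{ (ceph_crash_container_stat.stdout_lines | default([])) | length > 0 }}"

With this pattern a "skipping" result simply means the probe does not apply to that node's group, while an "ok" result means the fact was computed; only a true status lets the corresponding restart handler act.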
2025-05-14 02:37:03.048584 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048588 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048608 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048616 | orchestrator | 2025-05-14 02:37:03.048622 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:37:03.048628 | orchestrator | Wednesday 14 May 2025 02:33:26 +0000 (0:00:00.992) 0:09:43.473 ********* 2025-05-14 02:37:03.048635 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048641 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048645 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048649 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.048653 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.048658 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.048662 | orchestrator | 2025-05-14 02:37:03.048670 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:37:03.048674 | orchestrator | Wednesday 14 May 2025 02:33:27 +0000 (0:00:00.640) 0:09:44.113 ********* 2025-05-14 02:37:03.048679 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048683 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048687 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048691 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.048695 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.048699 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.048704 | orchestrator | 2025-05-14 02:37:03.048708 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:37:03.048712 | orchestrator | Wednesday 14 May 2025 02:33:28 +0000 (0:00:00.914) 0:09:45.028 ********* 2025-05-14 02:37:03.048716 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048720 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048724 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048729 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.048733 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.048737 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.048741 | orchestrator | 2025-05-14 02:37:03.048745 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:37:03.048749 | orchestrator | Wednesday 14 May 2025 02:33:28 +0000 (0:00:00.681) 0:09:45.709 ********* 2025-05-14 02:37:03.048753 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048761 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048766 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048770 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048774 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048778 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048782 | orchestrator | 2025-05-14 02:37:03.048786 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:37:03.048791 | orchestrator | Wednesday 14 May 2025 02:33:29 +0000 (0:00:00.897) 0:09:46.607 ********* 2025-05-14 02:37:03.048795 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048799 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048803 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048807 | orchestrator | 
skipping: [testbed-node-3] 2025-05-14 02:37:03.048814 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048821 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048832 | orchestrator | 2025-05-14 02:37:03.048838 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:37:03.048845 | orchestrator | Wednesday 14 May 2025 02:33:30 +0000 (0:00:00.640) 0:09:47.248 ********* 2025-05-14 02:37:03.048851 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.048857 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.048863 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.048870 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048876 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048882 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.048888 | orchestrator | 2025-05-14 02:37:03.048894 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:37:03.048900 | orchestrator | Wednesday 14 May 2025 02:33:31 +0000 (0:00:00.898) 0:09:48.146 ********* 2025-05-14 02:37:03.048906 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.048912 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.048918 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.048923 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.048929 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.048935 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.048942 | orchestrator | 2025-05-14 02:37:03.048949 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:37:03.048955 | orchestrator | Wednesday 14 May 2025 02:33:31 +0000 (0:00:00.659) 0:09:48.806 ********* 2025-05-14 02:37:03.048961 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.048968 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.048975 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.048981 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.048988 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.048996 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049001 | orchestrator | 2025-05-14 02:37:03.049005 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:37:03.049009 | orchestrator | Wednesday 14 May 2025 02:33:32 +0000 (0:00:00.871) 0:09:49.677 ********* 2025-05-14 02:37:03.049037 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049041 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049046 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049050 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049054 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049058 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049062 | orchestrator | 2025-05-14 02:37:03.049066 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:37:03.049071 | orchestrator | Wednesday 14 May 2025 02:33:33 +0000 (0:00:00.710) 0:09:50.388 ********* 2025-05-14 02:37:03.049075 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049079 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049083 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049087 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
02:37:03.049091 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049101 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049105 | orchestrator | 2025-05-14 02:37:03.049109 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:37:03.049113 | orchestrator | Wednesday 14 May 2025 02:33:34 +0000 (0:00:00.926) 0:09:51.314 ********* 2025-05-14 02:37:03.049118 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049122 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049126 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049130 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049134 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049139 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049143 | orchestrator | 2025-05-14 02:37:03.049147 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:37:03.049151 | orchestrator | Wednesday 14 May 2025 02:33:34 +0000 (0:00:00.683) 0:09:51.997 ********* 2025-05-14 02:37:03.049155 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049160 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049164 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049172 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049176 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049180 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049184 | orchestrator | 2025-05-14 02:37:03.049188 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:37:03.049193 | orchestrator | Wednesday 14 May 2025 02:33:35 +0000 (0:00:00.755) 0:09:52.753 ********* 2025-05-14 02:37:03.049197 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049201 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049205 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049209 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049214 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049218 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049222 | orchestrator | 2025-05-14 02:37:03.049226 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:37:03.049230 | orchestrator | Wednesday 14 May 2025 02:33:36 +0000 (0:00:00.569) 0:09:53.322 ********* 2025-05-14 02:37:03.049235 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049239 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049243 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049247 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049251 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049255 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049260 | orchestrator | 2025-05-14 02:37:03.049264 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:37:03.049268 | orchestrator | Wednesday 14 May 2025 02:33:37 +0000 (0:00:00.702) 0:09:54.024 ********* 2025-05-14 02:37:03.049272 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049276 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049280 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049284 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 02:37:03.049288 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049292 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049297 | orchestrator | 2025-05-14 02:37:03.049301 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:37:03.049305 | orchestrator | Wednesday 14 May 2025 02:33:37 +0000 (0:00:00.581) 0:09:54.605 ********* 2025-05-14 02:37:03.049309 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049313 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049317 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049321 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049325 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049329 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049333 | orchestrator | 2025-05-14 02:37:03.049341 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:37:03.049345 | orchestrator | Wednesday 14 May 2025 02:33:38 +0000 (0:00:00.902) 0:09:55.508 ********* 2025-05-14 02:37:03.049349 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049353 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049357 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049361 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049365 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049370 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049374 | orchestrator | 2025-05-14 02:37:03.049378 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:37:03.049382 | orchestrator | Wednesday 14 May 2025 02:33:39 +0000 (0:00:00.680) 0:09:56.189 ********* 2025-05-14 02:37:03.049386 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049391 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049395 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049399 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049403 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049407 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049412 | orchestrator | 2025-05-14 02:37:03.049416 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:37:03.049420 | orchestrator | Wednesday 14 May 2025 02:33:40 +0000 (0:00:00.905) 0:09:57.094 ********* 2025-05-14 02:37:03.049424 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049429 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049433 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049450 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049455 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049459 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049463 | orchestrator | 2025-05-14 02:37:03.049467 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:37:03.049472 | orchestrator | Wednesday 14 May 2025 02:33:40 +0000 (0:00:00.722) 0:09:57.816 ********* 2025-05-14 02:37:03.049476 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:37:03.049480 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 
02:37:03.049484 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.049488 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:37:03.049492 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049496 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.049500 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049504 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:37:03.049508 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.049512 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.049516 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049520 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.049524 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.049528 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049533 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049537 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.049541 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.049545 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049549 | orchestrator | 2025-05-14 02:37:03.049553 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:37:03.049557 | orchestrator | Wednesday 14 May 2025 02:33:41 +0000 (0:00:01.010) 0:09:58.827 ********* 2025-05-14 02:37:03.049564 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:37:03.049568 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:37:03.049572 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049580 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 02:37:03.049584 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:37:03.049588 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049607 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:37:03.049612 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:37:03.049616 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049620 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:37:03.049624 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:37:03.049628 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:37:03.049632 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:37:03.049636 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049641 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049645 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 02:37:03.049649 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:37:03.049653 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049657 | orchestrator | 2025-05-14 02:37:03.049661 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:37:03.049665 | orchestrator | Wednesday 14 May 2025 02:33:42 +0000 (0:00:00.778) 0:09:59.606 ********* 2025-05-14 02:37:03.049669 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049673 | 
orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049677 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049681 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049685 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049689 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049693 | orchestrator | 2025-05-14 02:37:03.049697 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:37:03.049701 | orchestrator | Wednesday 14 May 2025 02:33:43 +0000 (0:00:00.985) 0:10:00.591 ********* 2025-05-14 02:37:03.049705 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049709 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049713 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049718 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049722 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049726 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049730 | orchestrator | 2025-05-14 02:37:03.049734 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:37:03.049738 | orchestrator | Wednesday 14 May 2025 02:33:44 +0000 (0:00:00.988) 0:10:01.579 ********* 2025-05-14 02:37:03.049742 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049746 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049750 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049754 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049758 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049763 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049767 | orchestrator | 2025-05-14 02:37:03.049771 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:37:03.049775 | orchestrator | Wednesday 14 May 2025 02:33:45 +0000 (0:00:00.929) 0:10:02.509 ********* 2025-05-14 02:37:03.049779 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049783 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049787 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049791 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049795 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049799 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049803 | orchestrator | 2025-05-14 02:37:03.049807 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:37:03.049815 | orchestrator | Wednesday 14 May 2025 02:33:46 +0000 (0:00:00.783) 0:10:03.293 ********* 2025-05-14 02:37:03.049833 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049838 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049842 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049846 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049850 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049854 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049858 | orchestrator | 2025-05-14 02:37:03.049862 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:37:03.049866 | orchestrator | Wednesday 14 May 2025 02:33:47 +0000 (0:00:01.098) 0:10:04.391 ********* 2025-05-14 02:37:03.049870 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049874 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.049878 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.049882 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.049887 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.049891 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.049895 | orchestrator | 2025-05-14 02:37:03.049899 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:37:03.049903 | orchestrator | Wednesday 14 May 2025 02:33:48 +0000 (0:00:00.741) 0:10:05.133 ********* 2025-05-14 02:37:03.049907 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.049911 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.049915 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.049919 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049923 | orchestrator | 2025-05-14 02:37:03.049927 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:37:03.049932 | orchestrator | Wednesday 14 May 2025 02:33:48 +0000 (0:00:00.420) 0:10:05.554 ********* 2025-05-14 02:37:03.049936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.049943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.049947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.049951 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049955 | orchestrator | 2025-05-14 02:37:03.049959 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:37:03.049963 | orchestrator | Wednesday 14 May 2025 02:33:48 +0000 (0:00:00.430) 0:10:05.984 ********* 2025-05-14 02:37:03.049967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.049971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.049976 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.049980 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.049984 | orchestrator | 2025-05-14 02:37:03.049988 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.049992 | orchestrator | Wednesday 14 May 2025 02:33:49 +0000 (0:00:00.733) 0:10:06.718 ********* 2025-05-14 02:37:03.049996 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050000 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050004 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050008 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050044 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050049 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050053 | orchestrator | 2025-05-14 02:37:03.050058 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:37:03.050062 | orchestrator | Wednesday 14 May 2025 02:33:50 +0000 (0:00:01.074) 0:10:07.792 ********* 2025-05-14 02:37:03.050066 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.050070 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050074 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-05-14 02:37:03.050078 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050086 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.050090 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050094 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.050098 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050102 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.050106 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050111 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.050115 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050119 | orchestrator | 2025-05-14 02:37:03.050123 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:37:03.050127 | orchestrator | Wednesday 14 May 2025 02:33:52 +0000 (0:00:01.270) 0:10:09.063 ********* 2025-05-14 02:37:03.050131 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050135 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050139 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050143 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050147 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050151 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050155 | orchestrator | 2025-05-14 02:37:03.050160 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.050164 | orchestrator | Wednesday 14 May 2025 02:33:53 +0000 (0:00:01.021) 0:10:10.084 ********* 2025-05-14 02:37:03.050168 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050172 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050176 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050180 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050184 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050188 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050192 | orchestrator | 2025-05-14 02:37:03.050196 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:37:03.050200 | orchestrator | Wednesday 14 May 2025 02:33:53 +0000 (0:00:00.754) 0:10:10.838 ********* 2025-05-14 02:37:03.050205 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:37:03.050209 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050213 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:37:03.050217 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050221 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:37:03.050225 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050244 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.050249 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050253 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.050257 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050261 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.050265 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050269 | orchestrator | 2025-05-14 02:37:03.050273 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 
02:37:03.050278 | orchestrator | Wednesday 14 May 2025 02:33:55 +0000 (0:00:01.220) 0:10:12.059 ********* 2025-05-14 02:37:03.050282 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050286 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050290 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050294 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.050298 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050303 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.050307 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050311 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.050318 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050322 | orchestrator | 2025-05-14 02:37:03.050326 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:37:03.050331 | orchestrator | Wednesday 14 May 2025 02:33:55 +0000 (0:00:00.733) 0:10:12.792 ********* 2025-05-14 02:37:03.050338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:37:03.050342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:37:03.050346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:37:03.050351 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050355 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:37:03.050359 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:37:03.050363 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:37:03.050367 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050371 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:37:03.050375 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:37:03.050379 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:37:03.050383 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.050391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.050396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.050400 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050404 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:37:03.050408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:37:03.050412 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:37:03.050416 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050420 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:37:03.050424 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:37:03.050428 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:37:03.050433 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050437 | 
orchestrator | 2025-05-14 02:37:03.050441 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:37:03.050445 | orchestrator | Wednesday 14 May 2025 02:33:57 +0000 (0:00:01.518) 0:10:14.311 ********* 2025-05-14 02:37:03.050449 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050453 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050457 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050461 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050465 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050470 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050474 | orchestrator | 2025-05-14 02:37:03.050478 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:37:03.050482 | orchestrator | Wednesday 14 May 2025 02:33:58 +0000 (0:00:01.568) 0:10:15.879 ********* 2025-05-14 02:37:03.050486 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050490 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050494 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050498 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.050502 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050506 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:37:03.050510 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050515 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:37:03.050519 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050523 | orchestrator | 2025-05-14 02:37:03.050527 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:37:03.050535 | orchestrator | Wednesday 14 May 2025 02:34:00 +0000 (0:00:01.474) 0:10:17.353 ********* 2025-05-14 02:37:03.050539 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050543 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050547 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050551 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050555 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050559 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050564 | orchestrator | 2025-05-14 02:37:03.050568 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:37:03.050574 | orchestrator | Wednesday 14 May 2025 02:34:01 +0000 (0:00:01.457) 0:10:18.811 ********* 2025-05-14 02:37:03.050578 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:03.050582 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:03.050586 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:03.050590 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.050630 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.050635 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.050639 | orchestrator | 2025-05-14 02:37:03.050643 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-05-14 02:37:03.050647 | orchestrator | Wednesday 14 May 2025 02:34:03 +0000 (0:00:01.477) 0:10:20.288 ********* 2025-05-14 02:37:03.050652 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.050656 | orchestrator | 2025-05-14 02:37:03.050660 | orchestrator | TASK [ceph-crash : get keys 
from monitors] ************************************* 2025-05-14 02:37:03.050664 | orchestrator | Wednesday 14 May 2025 02:34:06 +0000 (0:00:03.510) 0:10:23.799 ********* 2025-05-14 02:37:03.050668 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.050673 | orchestrator | 2025-05-14 02:37:03.050677 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-05-14 02:37:03.050681 | orchestrator | Wednesday 14 May 2025 02:34:08 +0000 (0:00:01.973) 0:10:25.772 ********* 2025-05-14 02:37:03.050685 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.050689 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.050693 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.050698 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.050702 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.050706 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.050710 | orchestrator | 2025-05-14 02:37:03.050714 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-05-14 02:37:03.050718 | orchestrator | Wednesday 14 May 2025 02:34:10 +0000 (0:00:01.762) 0:10:27.535 ********* 2025-05-14 02:37:03.050723 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.050727 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.050736 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.050740 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.050744 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.050748 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.050752 | orchestrator | 2025-05-14 02:37:03.050757 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-05-14 02:37:03.050761 | orchestrator | Wednesday 14 May 2025 02:34:11 +0000 (0:00:01.316) 0:10:28.851 ********* 2025-05-14 02:37:03.050765 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.050771 | orchestrator | 2025-05-14 02:37:03.050775 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-05-14 02:37:03.050779 | orchestrator | Wednesday 14 May 2025 02:34:13 +0000 (0:00:01.376) 0:10:30.228 ********* 2025-05-14 02:37:03.050783 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.050787 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.050792 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.050796 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.050804 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.050808 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.050813 | orchestrator | 2025-05-14 02:37:03.050817 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-05-14 02:37:03.050821 | orchestrator | Wednesday 14 May 2025 02:34:15 +0000 (0:00:01.962) 0:10:32.191 ********* 2025-05-14 02:37:03.050825 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.050829 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.050833 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.050837 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.050841 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.050845 | orchestrator | changed: [testbed-node-5] 2025-05-14 
02:37:03.050849 | orchestrator | 2025-05-14 02:37:03.050853 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-05-14 02:37:03.050857 | orchestrator | Wednesday 14 May 2025 02:34:19 +0000 (0:00:04.486) 0:10:36.677 ********* 2025-05-14 02:37:03.050862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.050866 | orchestrator | 2025-05-14 02:37:03.050870 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-05-14 02:37:03.050874 | orchestrator | Wednesday 14 May 2025 02:34:21 +0000 (0:00:01.426) 0:10:38.103 ********* 2025-05-14 02:37:03.050878 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.050882 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.050886 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.050890 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.050894 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.050899 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.050903 | orchestrator | 2025-05-14 02:37:03.050907 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-05-14 02:37:03.050911 | orchestrator | Wednesday 14 May 2025 02:34:21 +0000 (0:00:00.693) 0:10:38.797 ********* 2025-05-14 02:37:03.050915 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:03.050919 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.050923 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:03.050927 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:03.050931 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.050936 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.050940 | orchestrator | 2025-05-14 02:37:03.050944 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-05-14 02:37:03.050948 | orchestrator | Wednesday 14 May 2025 02:34:24 +0000 (0:00:02.647) 0:10:41.444 ********* 2025-05-14 02:37:03.050952 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:03.050957 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:03.050961 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:03.050965 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.050969 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.050973 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.050977 | orchestrator | 2025-05-14 02:37:03.050981 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-14 02:37:03.050986 | orchestrator | 2025-05-14 02:37:03.050990 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:37:03.050999 | orchestrator | Wednesday 14 May 2025 02:34:27 +0000 (0:00:02.818) 0:10:44.262 ********* 2025-05-14 02:37:03.051004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.051008 | orchestrator | 2025-05-14 02:37:03.051012 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:37:03.051016 | orchestrator | Wednesday 14 May 2025 02:34:28 +0000 (0:00:00.777) 0:10:45.040 ********* 2025-05-14 02:37:03.051021 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051025 | 
orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051029 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051036 | orchestrator | 2025-05-14 02:37:03.051040 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:37:03.051045 | orchestrator | Wednesday 14 May 2025 02:34:28 +0000 (0:00:00.329) 0:10:45.369 ********* 2025-05-14 02:37:03.051049 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.051053 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.051057 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.051061 | orchestrator | 2025-05-14 02:37:03.051066 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:37:03.051070 | orchestrator | Wednesday 14 May 2025 02:34:29 +0000 (0:00:00.739) 0:10:46.109 ********* 2025-05-14 02:37:03.051074 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.051078 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.051082 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.051086 | orchestrator | 2025-05-14 02:37:03.051090 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:37:03.051094 | orchestrator | Wednesday 14 May 2025 02:34:29 +0000 (0:00:00.691) 0:10:46.801 ********* 2025-05-14 02:37:03.051097 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.051101 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.051105 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.051109 | orchestrator | 2025-05-14 02:37:03.051115 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:37:03.051119 | orchestrator | Wednesday 14 May 2025 02:34:30 +0000 (0:00:01.019) 0:10:47.820 ********* 2025-05-14 02:37:03.051123 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051127 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051131 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051134 | orchestrator | 2025-05-14 02:37:03.051138 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:37:03.051142 | orchestrator | Wednesday 14 May 2025 02:34:31 +0000 (0:00:00.321) 0:10:48.141 ********* 2025-05-14 02:37:03.051146 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051149 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051153 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051157 | orchestrator | 2025-05-14 02:37:03.051161 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:37:03.051165 | orchestrator | Wednesday 14 May 2025 02:34:31 +0000 (0:00:00.343) 0:10:48.485 ********* 2025-05-14 02:37:03.051168 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051172 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051176 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051180 | orchestrator | 2025-05-14 02:37:03.051183 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:37:03.051187 | orchestrator | Wednesday 14 May 2025 02:34:31 +0000 (0:00:00.308) 0:10:48.794 ********* 2025-05-14 02:37:03.051191 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051195 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051198 | orchestrator | skipping: [testbed-node-5] 
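The ceph-crash play above finishes its real work by templating a systemd unit for the containerized ceph-crash daemon and starting the service on every node before the crash handler restarts it once. A minimal sketch of what such a pair of tasks could look like; the template name, unit path, and service name are illustrative assumptions, not taken from this log:

# Sketch only (assumed file and unit names): render the unit, then enable
# and start the per-host ceph-crash service with a daemon reload.
- name: generate systemd unit file for ceph-crash container
  ansible.builtin.template:
    src: ceph-crash.service.j2
    dest: /etc/systemd/system/ceph-crash@.service
    owner: root
    group: root
    mode: "0644"

- name: start the ceph-crash service
  ansible.builtin.systemd:
    name: "ceph-crash@{{ ansible_facts['hostname'] }}"
    state: started
    enabled: true
    daemon_reload: true

The "changed" results on all six nodes above are consistent with this shape: the unit file is written once and the service transitions to started, after which the handler block restarts it and records _crash_handler_called so the restart is not repeated.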
2025-05-14 02:37:03.051202 | orchestrator | 2025-05-14 02:37:03.051206 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:37:03.051209 | orchestrator | Wednesday 14 May 2025 02:34:32 +0000 (0:00:00.607) 0:10:49.401 ********* 2025-05-14 02:37:03.051213 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051217 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051220 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051224 | orchestrator | 2025-05-14 02:37:03.051228 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:37:03.051232 | orchestrator | Wednesday 14 May 2025 02:34:32 +0000 (0:00:00.323) 0:10:49.725 ********* 2025-05-14 02:37:03.051235 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051239 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051243 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051247 | orchestrator | 2025-05-14 02:37:03.051250 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:37:03.051257 | orchestrator | Wednesday 14 May 2025 02:34:33 +0000 (0:00:00.318) 0:10:50.044 ********* 2025-05-14 02:37:03.051261 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.051265 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.051268 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.051272 | orchestrator | 2025-05-14 02:37:03.051276 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:37:03.051280 | orchestrator | Wednesday 14 May 2025 02:34:33 +0000 (0:00:00.747) 0:10:50.792 ********* 2025-05-14 02:37:03.051284 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051287 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051291 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051295 | orchestrator | 2025-05-14 02:37:03.051299 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:37:03.051303 | orchestrator | Wednesday 14 May 2025 02:34:34 +0000 (0:00:00.597) 0:10:51.389 ********* 2025-05-14 02:37:03.051306 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051310 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051314 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051318 | orchestrator | 2025-05-14 02:37:03.051321 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:37:03.051325 | orchestrator | Wednesday 14 May 2025 02:34:34 +0000 (0:00:00.320) 0:10:51.710 ********* 2025-05-14 02:37:03.051329 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.051333 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.051337 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.051341 | orchestrator | 2025-05-14 02:37:03.051344 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:37:03.051351 | orchestrator | Wednesday 14 May 2025 02:34:35 +0000 (0:00:00.370) 0:10:52.081 ********* 2025-05-14 02:37:03.051355 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.051358 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.051362 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.051366 | orchestrator | 2025-05-14 02:37:03.051370 | orchestrator | TASK [ceph-handler : 
set_fact handler_rgw_status] ****************************** 2025-05-14 02:37:03.051374 | orchestrator | Wednesday 14 May 2025 02:34:35 +0000 (0:00:00.312) 0:10:52.393 ********* 2025-05-14 02:37:03.051378 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.051382 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.051385 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.051389 | orchestrator | 2025-05-14 02:37:03.051393 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:37:03.051397 | orchestrator | Wednesday 14 May 2025 02:34:35 +0000 (0:00:00.490) 0:10:52.884 ********* 2025-05-14 02:37:03.051401 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051404 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051408 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051412 | orchestrator | 2025-05-14 02:37:03.051416 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:37:03.051420 | orchestrator | Wednesday 14 May 2025 02:34:36 +0000 (0:00:00.272) 0:10:53.156 ********* 2025-05-14 02:37:03.051423 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051427 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051431 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051435 | orchestrator | 2025-05-14 02:37:03.051439 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:37:03.051443 | orchestrator | Wednesday 14 May 2025 02:34:36 +0000 (0:00:00.287) 0:10:53.444 ********* 2025-05-14 02:37:03.051446 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051450 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051454 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051458 | orchestrator | 2025-05-14 02:37:03.051464 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:37:03.051468 | orchestrator | Wednesday 14 May 2025 02:34:36 +0000 (0:00:00.273) 0:10:53.718 ********* 2025-05-14 02:37:03.051475 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.051479 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.051483 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.051486 | orchestrator | 2025-05-14 02:37:03.051490 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:37:03.051494 | orchestrator | Wednesday 14 May 2025 02:34:37 +0000 (0:00:00.496) 0:10:54.214 ********* 2025-05-14 02:37:03.051498 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051502 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051505 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051509 | orchestrator | 2025-05-14 02:37:03.051513 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:37:03.051517 | orchestrator | Wednesday 14 May 2025 02:34:37 +0000 (0:00:00.273) 0:10:54.488 ********* 2025-05-14 02:37:03.051520 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051524 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051528 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051531 | orchestrator | 2025-05-14 02:37:03.051535 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:37:03.051539 | 
orchestrator | Wednesday 14 May 2025 02:34:37 +0000 (0:00:00.329) 0:10:54.818 ********* 2025-05-14 02:37:03.051543 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051546 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051550 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051554 | orchestrator | 2025-05-14 02:37:03.051558 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:37:03.051561 | orchestrator | Wednesday 14 May 2025 02:34:38 +0000 (0:00:00.328) 0:10:55.146 ********* 2025-05-14 02:37:03.051565 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051569 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051572 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051576 | orchestrator | 2025-05-14 02:37:03.051580 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:37:03.051584 | orchestrator | Wednesday 14 May 2025 02:34:38 +0000 (0:00:00.475) 0:10:55.621 ********* 2025-05-14 02:37:03.051587 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051601 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051606 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051609 | orchestrator | 2025-05-14 02:37:03.051613 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:37:03.051617 | orchestrator | Wednesday 14 May 2025 02:34:38 +0000 (0:00:00.327) 0:10:55.949 ********* 2025-05-14 02:37:03.051621 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051625 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051629 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051633 | orchestrator | 2025-05-14 02:37:03.051636 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:37:03.051640 | orchestrator | Wednesday 14 May 2025 02:34:39 +0000 (0:00:00.348) 0:10:56.298 ********* 2025-05-14 02:37:03.051644 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051648 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051652 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051655 | orchestrator | 2025-05-14 02:37:03.051659 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:37:03.051663 | orchestrator | Wednesday 14 May 2025 02:34:39 +0000 (0:00:00.323) 0:10:56.622 ********* 2025-05-14 02:37:03.051667 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051671 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051675 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051678 | orchestrator | 2025-05-14 02:37:03.051682 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:37:03.051686 | orchestrator | Wednesday 14 May 2025 02:34:40 +0000 (0:00:00.591) 0:10:57.213 ********* 2025-05-14 02:37:03.051696 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051700 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051704 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051708 | orchestrator | 2025-05-14 02:37:03.051714 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 
02:37:03.051718 | orchestrator | Wednesday 14 May 2025 02:34:40 +0000 (0:00:00.355) 0:10:57.568 ********* 2025-05-14 02:37:03.051722 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051726 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051729 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051733 | orchestrator | 2025-05-14 02:37:03.051737 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:37:03.051741 | orchestrator | Wednesday 14 May 2025 02:34:40 +0000 (0:00:00.372) 0:10:57.941 ********* 2025-05-14 02:37:03.051745 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051748 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051752 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051756 | orchestrator | 2025-05-14 02:37:03.051760 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:37:03.051764 | orchestrator | Wednesday 14 May 2025 02:34:41 +0000 (0:00:00.369) 0:10:58.310 ********* 2025-05-14 02:37:03.051768 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051772 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051775 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051779 | orchestrator | 2025-05-14 02:37:03.051783 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:37:03.051787 | orchestrator | Wednesday 14 May 2025 02:34:41 +0000 (0:00:00.605) 0:10:58.916 ********* 2025-05-14 02:37:03.051791 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.051794 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.051798 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051802 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.051806 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.051812 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051816 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.051820 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.051824 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051828 | orchestrator | 2025-05-14 02:37:03.051832 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:37:03.051836 | orchestrator | Wednesday 14 May 2025 02:34:42 +0000 (0:00:00.385) 0:10:59.301 ********* 2025-05-14 02:37:03.051839 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:37:03.051843 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:37:03.051847 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051851 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:37:03.051854 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:37:03.051858 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051862 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 02:37:03.051866 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:37:03.051870 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051873 | orchestrator | 2025-05-14 02:37:03.051877 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-05-14 02:37:03.051881 | orchestrator | Wednesday 14 May 2025 02:34:42 +0000 (0:00:00.398) 0:10:59.700 ********* 2025-05-14 02:37:03.051885 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051888 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051892 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051896 | orchestrator | 2025-05-14 02:37:03.051902 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:37:03.051906 | orchestrator | Wednesday 14 May 2025 02:34:43 +0000 (0:00:00.324) 0:11:00.024 ********* 2025-05-14 02:37:03.051910 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051914 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051917 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051921 | orchestrator | 2025-05-14 02:37:03.051925 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:37:03.051929 | orchestrator | Wednesday 14 May 2025 02:34:43 +0000 (0:00:00.591) 0:11:00.616 ********* 2025-05-14 02:37:03.051933 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051936 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051940 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051944 | orchestrator | 2025-05-14 02:37:03.051948 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:37:03.051952 | orchestrator | Wednesday 14 May 2025 02:34:43 +0000 (0:00:00.350) 0:11:00.966 ********* 2025-05-14 02:37:03.051955 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051959 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051963 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051967 | orchestrator | 2025-05-14 02:37:03.051971 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:37:03.051974 | orchestrator | Wednesday 14 May 2025 02:34:44 +0000 (0:00:00.389) 0:11:01.356 ********* 2025-05-14 02:37:03.051978 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.051982 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.051986 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.051990 | orchestrator | 2025-05-14 02:37:03.051994 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:37:03.051997 | orchestrator | Wednesday 14 May 2025 02:34:44 +0000 (0:00:00.422) 0:11:01.778 ********* 2025-05-14 02:37:03.052001 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052005 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052009 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052013 | orchestrator | 2025-05-14 02:37:03.052017 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:37:03.052020 | orchestrator | Wednesday 14 May 2025 02:34:45 +0000 (0:00:00.659) 0:11:02.438 ********* 2025-05-14 02:37:03.052024 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.052030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.052034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.052038 | orchestrator | 
skipping: [testbed-node-3] 2025-05-14 02:37:03.052042 | orchestrator | 2025-05-14 02:37:03.052046 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:37:03.052050 | orchestrator | Wednesday 14 May 2025 02:34:45 +0000 (0:00:00.486) 0:11:02.924 ********* 2025-05-14 02:37:03.052054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.052058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.052062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.052065 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052069 | orchestrator | 2025-05-14 02:37:03.052073 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:37:03.052077 | orchestrator | Wednesday 14 May 2025 02:34:46 +0000 (0:00:00.539) 0:11:03.463 ********* 2025-05-14 02:37:03.052081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.052084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.052088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.052092 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052096 | orchestrator | 2025-05-14 02:37:03.052103 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.052106 | orchestrator | Wednesday 14 May 2025 02:34:46 +0000 (0:00:00.467) 0:11:03.931 ********* 2025-05-14 02:37:03.052110 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052114 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052118 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052121 | orchestrator | 2025-05-14 02:37:03.052125 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:37:03.052132 | orchestrator | Wednesday 14 May 2025 02:34:47 +0000 (0:00:00.597) 0:11:04.529 ********* 2025-05-14 02:37:03.052136 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.052140 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052143 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.052147 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052151 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.052155 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052159 | orchestrator | 2025-05-14 02:37:03.052162 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:37:03.052166 | orchestrator | Wednesday 14 May 2025 02:34:48 +0000 (0:00:00.632) 0:11:05.161 ********* 2025-05-14 02:37:03.052170 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052174 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052178 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052181 | orchestrator | 2025-05-14 02:37:03.052185 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.052189 | orchestrator | Wednesday 14 May 2025 02:34:48 +0000 (0:00:00.712) 0:11:05.874 ********* 2025-05-14 02:37:03.052192 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052196 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052200 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:37:03.052204 | orchestrator | 2025-05-14 02:37:03.052207 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:37:03.052211 | orchestrator | Wednesday 14 May 2025 02:34:49 +0000 (0:00:00.326) 0:11:06.200 ********* 2025-05-14 02:37:03.052215 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.052218 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052222 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.052226 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052229 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.052233 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052237 | orchestrator | 2025-05-14 02:37:03.052241 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:37:03.052244 | orchestrator | Wednesday 14 May 2025 02:34:49 +0000 (0:00:00.508) 0:11:06.708 ********* 2025-05-14 02:37:03.052248 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.052252 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052256 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.052260 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052263 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.052267 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052271 | orchestrator | 2025-05-14 02:37:03.052275 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:37:03.052279 | orchestrator | Wednesday 14 May 2025 02:34:50 +0000 (0:00:00.349) 0:11:07.058 ********* 2025-05-14 02:37:03.052282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.052286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.052293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.052297 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052301 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:37:03.052305 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:37:03.052309 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:37:03.052313 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052317 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:37:03.052320 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:37:03.052327 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:37:03.052331 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052335 | orchestrator | 2025-05-14 02:37:03.052338 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:37:03.052342 | orchestrator | Wednesday 14 May 2025 02:34:51 +0000 (0:00:00.986) 0:11:08.044 ********* 2025-05-14 02:37:03.052346 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052350 | 
orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052354 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052358 | orchestrator | 2025-05-14 02:37:03.052362 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:37:03.052365 | orchestrator | Wednesday 14 May 2025 02:34:51 +0000 (0:00:00.568) 0:11:08.613 ********* 2025-05-14 02:37:03.052369 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.052373 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052377 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:37:03.052381 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052384 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:37:03.052388 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052392 | orchestrator | 2025-05-14 02:37:03.052396 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:37:03.052400 | orchestrator | Wednesday 14 May 2025 02:34:52 +0000 (0:00:00.885) 0:11:09.499 ********* 2025-05-14 02:37:03.052403 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052407 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052411 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052415 | orchestrator | 2025-05-14 02:37:03.052419 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:37:03.052423 | orchestrator | Wednesday 14 May 2025 02:34:53 +0000 (0:00:00.565) 0:11:10.064 ********* 2025-05-14 02:37:03.052426 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052433 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052436 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052440 | orchestrator | 2025-05-14 02:37:03.052444 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-14 02:37:03.052448 | orchestrator | Wednesday 14 May 2025 02:34:53 +0000 (0:00:00.875) 0:11:10.940 ********* 2025-05-14 02:37:03.052452 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052456 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052459 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-14 02:37:03.052463 | orchestrator | 2025-05-14 02:37:03.052467 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-14 02:37:03.052471 | orchestrator | Wednesday 14 May 2025 02:34:54 +0000 (0:00:00.462) 0:11:11.402 ********* 2025-05-14 02:37:03.052475 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:37:03.052478 | orchestrator | 2025-05-14 02:37:03.052482 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-14 02:37:03.052486 | orchestrator | Wednesday 14 May 2025 02:34:56 +0000 (0:00:01.872) 0:11:13.275 ********* 2025-05-14 02:37:03.052491 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-14 02:37:03.052500 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052504 | orchestrator | 2025-05-14 02:37:03.052508 | orchestrator | TASK [ceph-mds : 
create filesystem pools] ************************************** 2025-05-14 02:37:03.052512 | orchestrator | Wednesday 14 May 2025 02:34:56 +0000 (0:00:00.487) 0:11:13.763 ********* 2025-05-14 02:37:03.052517 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:37:03.052526 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:37:03.052530 | orchestrator | 2025-05-14 02:37:03.052534 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-14 02:37:03.052537 | orchestrator | Wednesday 14 May 2025 02:35:03 +0000 (0:00:07.072) 0:11:20.836 ********* 2025-05-14 02:37:03.052541 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:37:03.052545 | orchestrator | 2025-05-14 02:37:03.052549 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-14 02:37:03.052553 | orchestrator | Wednesday 14 May 2025 02:35:06 +0000 (0:00:02.964) 0:11:23.800 ********* 2025-05-14 02:37:03.052557 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.052560 | orchestrator | 2025-05-14 02:37:03.052564 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-14 02:37:03.052568 | orchestrator | Wednesday 14 May 2025 02:35:07 +0000 (0:00:00.638) 0:11:24.439 ********* 2025-05-14 02:37:03.052572 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-14 02:37:03.052576 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-14 02:37:03.052579 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-14 02:37:03.052583 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-14 02:37:03.052589 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-14 02:37:03.052602 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-14 02:37:03.052606 | orchestrator | 2025-05-14 02:37:03.052610 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-14 02:37:03.052614 | orchestrator | Wednesday 14 May 2025 02:35:08 +0000 (0:00:01.539) 0:11:25.979 ********* 2025-05-14 02:37:03.052618 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:37:03.052621 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.052625 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 02:37:03.052629 | orchestrator | 2025-05-14 02:37:03.052633 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-14 02:37:03.052637 | orchestrator | Wednesday 14 May 2025 02:35:10 +0000 (0:00:01.886) 0:11:27.866 ********* 2025-05-14 02:37:03.052641 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 
02:37:03.052644 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.052648 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.052652 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:37:03.052656 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:37:03.052660 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 02:37:03.052663 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.052672 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:37:03.052676 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.052680 | orchestrator | 2025-05-14 02:37:03.052683 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-14 02:37:03.052687 | orchestrator | Wednesday 14 May 2025 02:35:12 +0000 (0:00:01.296) 0:11:29.162 ********* 2025-05-14 02:37:03.052691 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052698 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.052702 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.052705 | orchestrator | 2025-05-14 02:37:03.052709 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-14 02:37:03.052713 | orchestrator | Wednesday 14 May 2025 02:35:12 +0000 (0:00:00.614) 0:11:29.777 ********* 2025-05-14 02:37:03.052717 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.052721 | orchestrator | 2025-05-14 02:37:03.052725 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-14 02:37:03.052729 | orchestrator | Wednesday 14 May 2025 02:35:13 +0000 (0:00:00.603) 0:11:30.381 ********* 2025-05-14 02:37:03.052732 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.052736 | orchestrator | 2025-05-14 02:37:03.052740 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-14 02:37:03.052744 | orchestrator | Wednesday 14 May 2025 02:35:14 +0000 (0:00:00.777) 0:11:31.159 ********* 2025-05-14 02:37:03.052748 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.052752 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.052755 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.052759 | orchestrator | 2025-05-14 02:37:03.052763 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-14 02:37:03.052767 | orchestrator | Wednesday 14 May 2025 02:35:15 +0000 (0:00:01.281) 0:11:32.440 ********* 2025-05-14 02:37:03.052771 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.052774 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.052778 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.052782 | orchestrator | 2025-05-14 02:37:03.052786 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-14 02:37:03.052790 | orchestrator | Wednesday 14 May 2025 02:35:16 +0000 (0:00:01.110) 0:11:33.551 ********* 2025-05-14 02:37:03.052794 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.052797 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.052801 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.052805 | orchestrator | 2025-05-14 
02:37:03.052809 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-14 02:37:03.052812 | orchestrator | Wednesday 14 May 2025 02:35:18 +0000 (0:00:01.957) 0:11:35.508 ********* 2025-05-14 02:37:03.052816 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.052820 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.052824 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.052827 | orchestrator | 2025-05-14 02:37:03.052831 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-14 02:37:03.052835 | orchestrator | Wednesday 14 May 2025 02:35:20 +0000 (0:00:02.054) 0:11:37.562 ********* 2025-05-14 02:37:03.052839 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-14 02:37:03.052843 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-14 02:37:03.052846 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-05-14 02:37:03.052850 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.052854 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.052858 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.052862 | orchestrator | 2025-05-14 02:37:03.052865 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:37:03.052872 | orchestrator | Wednesday 14 May 2025 02:35:37 +0000 (0:00:17.117) 0:11:54.680 ********* 2025-05-14 02:37:03.052876 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.052880 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.052884 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.052887 | orchestrator | 2025-05-14 02:37:03.052891 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-14 02:37:03.052895 | orchestrator | Wednesday 14 May 2025 02:35:38 +0000 (0:00:00.607) 0:11:55.287 ********* 2025-05-14 02:37:03.052899 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.052903 | orchestrator | 2025-05-14 02:37:03.052909 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-14 02:37:03.052913 | orchestrator | Wednesday 14 May 2025 02:35:38 +0000 (0:00:00.682) 0:11:55.970 ********* 2025-05-14 02:37:03.052917 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.052921 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.052925 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.052928 | orchestrator | 2025-05-14 02:37:03.052932 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-14 02:37:03.052936 | orchestrator | Wednesday 14 May 2025 02:35:39 +0000 (0:00:00.364) 0:11:56.334 ********* 2025-05-14 02:37:03.052940 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.052944 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.052947 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.052951 | orchestrator | 2025-05-14 02:37:03.052955 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-14 02:37:03.052959 | orchestrator | Wednesday 14 May 2025 02:35:40 +0000 (0:00:01.210) 0:11:57.545 ********* 2025-05-14 02:37:03.052963 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.052967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.052970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.052974 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.052978 | orchestrator | 2025-05-14 02:37:03.052982 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-14 02:37:03.052986 | orchestrator | Wednesday 14 May 2025 02:35:41 +0000 (0:00:01.136) 0:11:58.681 ********* 2025-05-14 02:37:03.052990 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.052993 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.052997 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.053001 | orchestrator | 2025-05-14 02:37:03.053005 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:37:03.053011 | orchestrator | Wednesday 14 May 2025 02:35:42 +0000 (0:00:00.328) 0:11:59.010 ********* 2025-05-14 02:37:03.053015 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.053018 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.053022 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.053026 | orchestrator | 2025-05-14 02:37:03.053030 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-14 02:37:03.053034 | orchestrator | 2025-05-14 02:37:03.053038 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:37:03.053041 | orchestrator | Wednesday 14 May 2025 02:35:44 +0000 (0:00:02.094) 0:12:01.104 ********* 2025-05-14 02:37:03.053045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.053049 | orchestrator | 2025-05-14 02:37:03.053053 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:37:03.053057 | orchestrator | Wednesday 14 May 2025 02:35:45 +0000 (0:00:00.919) 0:12:02.024 ********* 2025-05-14 02:37:03.053061 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053065 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053068 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053077 | orchestrator | 2025-05-14 02:37:03.053081 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:37:03.053085 | orchestrator | Wednesday 14 May 2025 02:35:45 +0000 (0:00:00.336) 0:12:02.361 ********* 2025-05-14 02:37:03.053089 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.053093 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.053096 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.053100 | orchestrator | 2025-05-14 02:37:03.053104 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:37:03.053108 | orchestrator | Wednesday 14 May 2025 02:35:46 +0000 (0:00:00.716) 0:12:03.077 ********* 2025-05-14 02:37:03.053112 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.053115 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.053119 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.053123 | orchestrator | 2025-05-14 02:37:03.053127 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
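Note on the ceph-mds play that finished above: it created the CephFS data and metadata pools and the filesystem itself before starting the containerized MDS daemons. As a rough orientation, the pool and filesystem tasks correspond approximately to the following manual ceph commands, using only the values visible in the logged items (pool names, pg_num/pgp_num 16, size 3, replicated_rule); the filesystem name "cephfs" is an assumption, as it is not shown in this excerpt:

    # approximate manual equivalent of "create filesystem pools" and "create ceph filesystem"
    ceph osd pool create cephfs_data 16 16 replicated replicated_rule        # pg_num/pgp_num from the logged items
    ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
    ceph osd pool set cephfs_data size 3                                      # 'size': 3 in the logged items
    ceph osd pool set cephfs_metadata size 3
    ceph osd pool application enable cephfs_data cephfs                       # 'application': 'cephfs'
    ceph osd pool application enable cephfs_metadata cephfs
    ceph fs new cephfs cephfs_metadata cephfs_data                            # metadata pool first, then data pool; fs name assumed

The MDS daemons themselves run as containers via the generated ceph-mds systemd units, which is why the play waits for the mds admin socket to appear before handing control to the handlers.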
2025-05-14 02:37:03.053131 | orchestrator | Wednesday 14 May 2025 02:35:47 +0000 (0:00:00.992) 0:12:04.069 ********* 2025-05-14 02:37:03.053134 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.053138 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.053142 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.053146 | orchestrator | 2025-05-14 02:37:03.053150 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:37:03.053153 | orchestrator | Wednesday 14 May 2025 02:35:47 +0000 (0:00:00.734) 0:12:04.803 ********* 2025-05-14 02:37:03.053157 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053161 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053165 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053169 | orchestrator | 2025-05-14 02:37:03.053173 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:37:03.053176 | orchestrator | Wednesday 14 May 2025 02:35:48 +0000 (0:00:00.375) 0:12:05.179 ********* 2025-05-14 02:37:03.053180 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053184 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053188 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053192 | orchestrator | 2025-05-14 02:37:03.053195 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:37:03.053199 | orchestrator | Wednesday 14 May 2025 02:35:48 +0000 (0:00:00.330) 0:12:05.509 ********* 2025-05-14 02:37:03.053203 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053207 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053211 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053214 | orchestrator | 2025-05-14 02:37:03.053218 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:37:03.053222 | orchestrator | Wednesday 14 May 2025 02:35:49 +0000 (0:00:00.618) 0:12:06.128 ********* 2025-05-14 02:37:03.053226 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053230 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053233 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053237 | orchestrator | 2025-05-14 02:37:03.053241 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:37:03.053248 | orchestrator | Wednesday 14 May 2025 02:35:49 +0000 (0:00:00.339) 0:12:06.468 ********* 2025-05-14 02:37:03.053252 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053256 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053259 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053263 | orchestrator | 2025-05-14 02:37:03.053267 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:37:03.053271 | orchestrator | Wednesday 14 May 2025 02:35:49 +0000 (0:00:00.327) 0:12:06.796 ********* 2025-05-14 02:37:03.053275 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053278 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053282 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053286 | orchestrator | 2025-05-14 02:37:03.053290 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:37:03.053298 | orchestrator | Wednesday 14 May 2025 02:35:50 
+0000 (0:00:00.325) 0:12:07.121 ********* 2025-05-14 02:37:03.053302 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.053306 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.053310 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.053314 | orchestrator | 2025-05-14 02:37:03.053317 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:37:03.053321 | orchestrator | Wednesday 14 May 2025 02:35:51 +0000 (0:00:01.065) 0:12:08.187 ********* 2025-05-14 02:37:03.053325 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053329 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053332 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053336 | orchestrator | 2025-05-14 02:37:03.053340 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:37:03.053344 | orchestrator | Wednesday 14 May 2025 02:35:51 +0000 (0:00:00.350) 0:12:08.537 ********* 2025-05-14 02:37:03.053348 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053351 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053355 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053359 | orchestrator | 2025-05-14 02:37:03.053366 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:37:03.053369 | orchestrator | Wednesday 14 May 2025 02:35:51 +0000 (0:00:00.329) 0:12:08.866 ********* 2025-05-14 02:37:03.053373 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.053377 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.053381 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.053385 | orchestrator | 2025-05-14 02:37:03.053389 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:37:03.053392 | orchestrator | Wednesday 14 May 2025 02:35:52 +0000 (0:00:00.340) 0:12:09.207 ********* 2025-05-14 02:37:03.053396 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.053400 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.053404 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.053408 | orchestrator | 2025-05-14 02:37:03.053411 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:37:03.053415 | orchestrator | Wednesday 14 May 2025 02:35:52 +0000 (0:00:00.658) 0:12:09.865 ********* 2025-05-14 02:37:03.053419 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.053423 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.053427 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.053431 | orchestrator | 2025-05-14 02:37:03.053434 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:37:03.053438 | orchestrator | Wednesday 14 May 2025 02:35:53 +0000 (0:00:00.348) 0:12:10.214 ********* 2025-05-14 02:37:03.053442 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053446 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053450 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053453 | orchestrator | 2025-05-14 02:37:03.053457 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:37:03.053461 | orchestrator | Wednesday 14 May 2025 02:35:53 +0000 (0:00:00.334) 0:12:10.549 ********* 2025-05-14 02:37:03.053465 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
02:37:03.053469 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053472 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053476 | orchestrator | 2025-05-14 02:37:03.053480 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:37:03.053484 | orchestrator | Wednesday 14 May 2025 02:35:53 +0000 (0:00:00.329) 0:12:10.878 ********* 2025-05-14 02:37:03.053488 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053491 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053495 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053499 | orchestrator | 2025-05-14 02:37:03.053503 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:37:03.053507 | orchestrator | Wednesday 14 May 2025 02:35:54 +0000 (0:00:00.606) 0:12:11.484 ********* 2025-05-14 02:37:03.053514 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.053518 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.053521 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.053525 | orchestrator | 2025-05-14 02:37:03.053529 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:37:03.053533 | orchestrator | Wednesday 14 May 2025 02:35:54 +0000 (0:00:00.345) 0:12:11.830 ********* 2025-05-14 02:37:03.053536 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053540 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053544 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053548 | orchestrator | 2025-05-14 02:37:03.053552 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:37:03.053555 | orchestrator | Wednesday 14 May 2025 02:35:55 +0000 (0:00:00.339) 0:12:12.169 ********* 2025-05-14 02:37:03.053559 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053563 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053567 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053571 | orchestrator | 2025-05-14 02:37:03.053574 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:37:03.053578 | orchestrator | Wednesday 14 May 2025 02:35:55 +0000 (0:00:00.349) 0:12:12.519 ********* 2025-05-14 02:37:03.053582 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053586 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053590 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053614 | orchestrator | 2025-05-14 02:37:03.053618 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:37:03.053625 | orchestrator | Wednesday 14 May 2025 02:35:56 +0000 (0:00:00.619) 0:12:13.138 ********* 2025-05-14 02:37:03.053629 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053633 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053637 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053640 | orchestrator | 2025-05-14 02:37:03.053644 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:37:03.053648 | orchestrator | Wednesday 14 May 2025 02:35:56 +0000 (0:00:00.343) 0:12:13.481 ********* 2025-05-14 02:37:03.053652 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053656 | orchestrator | skipping: [testbed-node-4] 2025-05-14 
02:37:03.053660 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053664 | orchestrator | 2025-05-14 02:37:03.053667 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:37:03.053671 | orchestrator | Wednesday 14 May 2025 02:35:56 +0000 (0:00:00.396) 0:12:13.878 ********* 2025-05-14 02:37:03.053675 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053679 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053683 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053687 | orchestrator | 2025-05-14 02:37:03.053690 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:37:03.053694 | orchestrator | Wednesday 14 May 2025 02:35:57 +0000 (0:00:00.323) 0:12:14.202 ********* 2025-05-14 02:37:03.053698 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053702 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053706 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053709 | orchestrator | 2025-05-14 02:37:03.053713 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:37:03.053717 | orchestrator | Wednesday 14 May 2025 02:35:57 +0000 (0:00:00.628) 0:12:14.830 ********* 2025-05-14 02:37:03.053721 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053725 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053729 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053732 | orchestrator | 2025-05-14 02:37:03.053740 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:37:03.053744 | orchestrator | Wednesday 14 May 2025 02:35:58 +0000 (0:00:00.340) 0:12:15.171 ********* 2025-05-14 02:37:03.053751 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053755 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053759 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053762 | orchestrator | 2025-05-14 02:37:03.053766 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:37:03.053770 | orchestrator | Wednesday 14 May 2025 02:35:58 +0000 (0:00:00.353) 0:12:15.525 ********* 2025-05-14 02:37:03.053774 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053778 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053782 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053786 | orchestrator | 2025-05-14 02:37:03.053789 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:37:03.053793 | orchestrator | Wednesday 14 May 2025 02:35:58 +0000 (0:00:00.327) 0:12:15.852 ********* 2025-05-14 02:37:03.053797 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053801 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053805 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053808 | orchestrator | 2025-05-14 02:37:03.053812 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:37:03.053816 | orchestrator | Wednesday 14 May 2025 02:35:59 +0000 (0:00:00.622) 0:12:16.475 ********* 2025-05-14 02:37:03.053820 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053824 | orchestrator | 
skipping: [testbed-node-4] 2025-05-14 02:37:03.053828 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053831 | orchestrator | 2025-05-14 02:37:03.053835 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:37:03.053839 | orchestrator | Wednesday 14 May 2025 02:35:59 +0000 (0:00:00.331) 0:12:16.807 ********* 2025-05-14 02:37:03.053843 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.053847 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:37:03.053850 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053854 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.053858 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:37:03.053862 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053866 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.053869 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:37:03.053873 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053877 | orchestrator | 2025-05-14 02:37:03.053881 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:37:03.053885 | orchestrator | Wednesday 14 May 2025 02:36:00 +0000 (0:00:00.417) 0:12:17.225 ********* 2025-05-14 02:37:03.053888 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:37:03.053892 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:37:03.053896 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053900 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:37:03.053903 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:37:03.053907 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053911 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 02:37:03.053915 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:37:03.053919 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053922 | orchestrator | 2025-05-14 02:37:03.053926 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:37:03.053930 | orchestrator | Wednesday 14 May 2025 02:36:00 +0000 (0:00:00.445) 0:12:17.670 ********* 2025-05-14 02:37:03.053934 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053938 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053941 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053945 | orchestrator | 2025-05-14 02:37:03.053952 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:37:03.053958 | orchestrator | Wednesday 14 May 2025 02:36:01 +0000 (0:00:00.652) 0:12:18.323 ********* 2025-05-14 02:37:03.053962 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053966 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053969 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053973 | orchestrator | 2025-05-14 02:37:03.053977 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:37:03.053981 | orchestrator | Wednesday 14 May 2025 02:36:01 +0000 (0:00:00.350) 0:12:18.674 ********* 2025-05-14 
02:37:03.053985 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.053989 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.053992 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.053996 | orchestrator | 2025-05-14 02:37:03.054000 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:37:03.054004 | orchestrator | Wednesday 14 May 2025 02:36:02 +0000 (0:00:00.356) 0:12:19.030 ********* 2025-05-14 02:37:03.054008 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054012 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054034 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054037 | orchestrator | 2025-05-14 02:37:03.054041 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:37:03.054045 | orchestrator | Wednesday 14 May 2025 02:36:02 +0000 (0:00:00.360) 0:12:19.391 ********* 2025-05-14 02:37:03.054049 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054053 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054057 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054060 | orchestrator | 2025-05-14 02:37:03.054064 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:37:03.054068 | orchestrator | Wednesday 14 May 2025 02:36:02 +0000 (0:00:00.607) 0:12:19.998 ********* 2025-05-14 02:37:03.054072 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054079 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054082 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054086 | orchestrator | 2025-05-14 02:37:03.054090 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:37:03.054094 | orchestrator | Wednesday 14 May 2025 02:36:03 +0000 (0:00:00.357) 0:12:20.356 ********* 2025-05-14 02:37:03.054098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.054102 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.054105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.054109 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054113 | orchestrator | 2025-05-14 02:37:03.054117 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:37:03.054121 | orchestrator | Wednesday 14 May 2025 02:36:03 +0000 (0:00:00.438) 0:12:20.794 ********* 2025-05-14 02:37:03.054125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.054129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.054132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.054136 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054140 | orchestrator | 2025-05-14 02:37:03.054144 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:37:03.054148 | orchestrator | Wednesday 14 May 2025 02:36:04 +0000 (0:00:00.480) 0:12:21.276 ********* 2025-05-14 02:37:03.054151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.054155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.054159 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-05-14 02:37:03.054163 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054167 | orchestrator | 2025-05-14 02:37:03.054174 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.054178 | orchestrator | Wednesday 14 May 2025 02:36:04 +0000 (0:00:00.425) 0:12:21.702 ********* 2025-05-14 02:37:03.054182 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054186 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054189 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054193 | orchestrator | 2025-05-14 02:37:03.054197 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:37:03.054201 | orchestrator | Wednesday 14 May 2025 02:36:05 +0000 (0:00:00.354) 0:12:22.056 ********* 2025-05-14 02:37:03.054205 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.054209 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054212 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.054216 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054220 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.054224 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054228 | orchestrator | 2025-05-14 02:37:03.054231 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:37:03.054235 | orchestrator | Wednesday 14 May 2025 02:36:05 +0000 (0:00:00.770) 0:12:22.826 ********* 2025-05-14 02:37:03.054239 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054242 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054246 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054250 | orchestrator | 2025-05-14 02:37:03.054254 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:37:03.054258 | orchestrator | Wednesday 14 May 2025 02:36:06 +0000 (0:00:00.327) 0:12:23.154 ********* 2025-05-14 02:37:03.054261 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054265 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054269 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054273 | orchestrator | 2025-05-14 02:37:03.054277 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:37:03.054280 | orchestrator | Wednesday 14 May 2025 02:36:06 +0000 (0:00:00.344) 0:12:23.498 ********* 2025-05-14 02:37:03.054284 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:37:03.054288 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054292 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:37:03.054298 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054302 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:37:03.054306 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054310 | orchestrator | 2025-05-14 02:37:03.054313 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:37:03.054317 | orchestrator | Wednesday 14 May 2025 02:36:06 +0000 (0:00:00.441) 0:12:23.940 ********* 2025-05-14 02:37:03.054321 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
2025-05-14 02:37:03.054325 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054329 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.054333 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054337 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:37:03.054340 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054344 | orchestrator | 2025-05-14 02:37:03.054348 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:37:03.054352 | orchestrator | Wednesday 14 May 2025 02:36:07 +0000 (0:00:00.663) 0:12:24.603 ********* 2025-05-14 02:37:03.054356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.054360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.054367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.054371 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:37:03.054377 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:37:03.054381 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:37:03.054385 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054389 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054392 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:37:03.054396 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:37:03.054400 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:37:03.054404 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054408 | orchestrator | 2025-05-14 02:37:03.054411 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:37:03.054415 | orchestrator | Wednesday 14 May 2025 02:36:08 +0000 (0:00:00.693) 0:12:25.297 ********* 2025-05-14 02:37:03.054419 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054423 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054427 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054431 | orchestrator | 2025-05-14 02:37:03.054435 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:37:03.054438 | orchestrator | Wednesday 14 May 2025 02:36:09 +0000 (0:00:00.864) 0:12:26.161 ********* 2025-05-14 02:37:03.054442 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.054446 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054450 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:37:03.054454 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054458 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:37:03.054461 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054465 | orchestrator | 2025-05-14 02:37:03.054469 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:37:03.054473 | orchestrator | Wednesday 14 May 2025 02:36:09 +0000 (0:00:00.603) 0:12:26.764 ********* 2025-05-14 02:37:03.054477 | orchestrator | skipping: [testbed-node-3] 
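The 'create rgw keyrings' task is skipped in this run; the keys are instead fetched from a monitor and copied over in the following tasks. Purely for illustration, creating such a keyring by hand on the first monitor could look like the hypothetical task below; the entity name, capabilities and output path are assumptions, not values from this deployment.

# Sketch only (Ansible YAML) -- entity name, caps and keyring path are assumptions.
- name: create rgw keyring on the first monitor (sketch)
  ansible.builtin.command: >
    ceph auth get-or-create client.rgw.{{ inventory_hostname }}.rgw0
    mon 'allow rw' osd 'allow rwx'
    -o /etc/ceph/ceph.client.rgw.{{ inventory_hostname }}.rgw0.keyring
  args:
    # skip the command if the keyring file already exists
    creates: "/etc/ceph/ceph.client.rgw.{{ inventory_hostname }}.rgw0.keyring"
  # delegate to the first monitor, as the 'get keys from monitors' task does
  delegate_to: "{{ groups.get(mon_group_name)[0] }}"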
2025-05-14 02:37:03.054480 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054484 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054488 | orchestrator | 2025-05-14 02:37:03.054492 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:37:03.054496 | orchestrator | Wednesday 14 May 2025 02:36:10 +0000 (0:00:00.857) 0:12:27.622 ********* 2025-05-14 02:37:03.054500 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054503 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054507 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054511 | orchestrator | 2025-05-14 02:37:03.054515 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-05-14 02:37:03.054519 | orchestrator | Wednesday 14 May 2025 02:36:11 +0000 (0:00:00.552) 0:12:28.174 ********* 2025-05-14 02:37:03.054523 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.054526 | orchestrator | 2025-05-14 02:37:03.054530 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-05-14 02:37:03.054534 | orchestrator | Wednesday 14 May 2025 02:36:11 +0000 (0:00:00.802) 0:12:28.977 ********* 2025-05-14 02:37:03.054538 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-05-14 02:37:03.054542 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-05-14 02:37:03.054546 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-05-14 02:37:03.054549 | orchestrator | 2025-05-14 02:37:03.054553 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-05-14 02:37:03.054557 | orchestrator | Wednesday 14 May 2025 02:36:12 +0000 (0:00:00.710) 0:12:29.688 ********* 2025-05-14 02:37:03.054561 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:37:03.054569 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.054572 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 02:37:03.054576 | orchestrator | 2025-05-14 02:37:03.054580 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-05-14 02:37:03.054584 | orchestrator | Wednesday 14 May 2025 02:36:14 +0000 (0:00:01.894) 0:12:31.583 ********* 2025-05-14 02:37:03.054589 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 02:37:03.054602 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:37:03.054606 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.054610 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 02:37:03.054614 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:37:03.054617 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.054621 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:37:03.054625 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:37:03.054629 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.054632 | orchestrator | 2025-05-14 02:37:03.054636 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-05-14 02:37:03.054640 | orchestrator | Wednesday 14 May 2025 02:36:16 +0000 (0:00:01.609) 0:12:33.193 ********* 2025-05-14 02:37:03.054643 | 
orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054647 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054651 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054655 | orchestrator | 2025-05-14 02:37:03.054658 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-05-14 02:37:03.054662 | orchestrator | Wednesday 14 May 2025 02:36:16 +0000 (0:00:00.364) 0:12:33.558 ********* 2025-05-14 02:37:03.054666 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054670 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054673 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054677 | orchestrator | 2025-05-14 02:37:03.054681 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-05-14 02:37:03.054685 | orchestrator | Wednesday 14 May 2025 02:36:16 +0000 (0:00:00.351) 0:12:33.909 ********* 2025-05-14 02:37:03.054689 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-14 02:37:03.054693 | orchestrator | 2025-05-14 02:37:03.054699 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-05-14 02:37:03.054703 | orchestrator | Wednesday 14 May 2025 02:36:17 +0000 (0:00:00.239) 0:12:34.148 ********* 2025-05-14 02:37:03.054707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054726 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054730 | orchestrator | 2025-05-14 02:37:03.054734 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-14 02:37:03.054738 | orchestrator | Wednesday 14 May 2025 02:36:18 +0000 (0:00:00.978) 0:12:35.127 ********* 2025-05-14 02:37:03.054742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054812 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054816 | 
orchestrator | 2025-05-14 02:37:03.054820 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-14 02:37:03.054824 | orchestrator | Wednesday 14 May 2025 02:36:19 +0000 (0:00:00.896) 0:12:36.023 ********* 2025-05-14 02:37:03.054828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:37:03.054847 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054851 | orchestrator | 2025-05-14 02:37:03.054855 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-14 02:37:03.054859 | orchestrator | Wednesday 14 May 2025 02:36:19 +0000 (0:00:00.667) 0:12:36.691 ********* 2025-05-14 02:37:03.054865 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:37:03.054869 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:37:03.054873 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:37:03.054877 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:37:03.054881 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:37:03.054885 | orchestrator | 2025-05-14 02:37:03.054889 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-14 02:37:03.054892 | orchestrator | Wednesday 14 May 2025 02:36:43 +0000 (0:00:23.786) 0:13:00.477 ********* 2025-05-14 02:37:03.054896 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054900 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054904 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054908 | orchestrator | 2025-05-14 02:37:03.054911 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-14 02:37:03.054915 | orchestrator | Wednesday 14 May 2025 02:36:43 +0000 (0:00:00.482) 0:13:00.960 ********* 2025-05-14 02:37:03.054922 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.054926 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.054929 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.054933 | orchestrator | 2025-05-14 02:37:03.054937 | 
orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-14 02:37:03.054944 | orchestrator | Wednesday 14 May 2025 02:36:44 +0000 (0:00:00.409) 0:13:01.369 ********* 2025-05-14 02:37:03.054948 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.054952 | orchestrator | 2025-05-14 02:37:03.054955 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-14 02:37:03.054959 | orchestrator | Wednesday 14 May 2025 02:36:44 +0000 (0:00:00.558) 0:13:01.928 ********* 2025-05-14 02:37:03.054963 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.054967 | orchestrator | 2025-05-14 02:37:03.054970 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-14 02:37:03.054974 | orchestrator | Wednesday 14 May 2025 02:36:45 +0000 (0:00:00.885) 0:13:02.814 ********* 2025-05-14 02:37:03.054978 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.054981 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.054985 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.054989 | orchestrator | 2025-05-14 02:37:03.054993 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-14 02:37:03.054996 | orchestrator | Wednesday 14 May 2025 02:36:47 +0000 (0:00:01.292) 0:13:04.106 ********* 2025-05-14 02:37:03.055000 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.055004 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.055008 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.055011 | orchestrator | 2025-05-14 02:37:03.055015 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-05-14 02:37:03.055019 | orchestrator | Wednesday 14 May 2025 02:36:48 +0000 (0:00:01.192) 0:13:05.298 ********* 2025-05-14 02:37:03.055022 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.055026 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.055030 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.055034 | orchestrator | 2025-05-14 02:37:03.055037 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-14 02:37:03.055041 | orchestrator | Wednesday 14 May 2025 02:36:50 +0000 (0:00:02.182) 0:13:07.480 ********* 2025-05-14 02:37:03.055045 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-14 02:37:03.055049 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-14 02:37:03.055053 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-14 02:37:03.055057 | orchestrator | 2025-05-14 02:37:03.055061 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-05-14 02:37:03.055064 | orchestrator | Wednesday 14 May 2025 02:36:52 +0000 (0:00:01.950) 0:13:09.431 ********* 2025-05-14 02:37:03.055068 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.055072 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:37:03.055076 | 
orchestrator | skipping: [testbed-node-5] 2025-05-14 02:37:03.055080 | orchestrator | 2025-05-14 02:37:03.055084 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:37:03.055087 | orchestrator | Wednesday 14 May 2025 02:36:53 +0000 (0:00:01.227) 0:13:10.658 ********* 2025-05-14 02:37:03.055091 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.055095 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.055099 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.055103 | orchestrator | 2025-05-14 02:37:03.055106 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-14 02:37:03.055110 | orchestrator | Wednesday 14 May 2025 02:36:54 +0000 (0:00:00.731) 0:13:11.390 ********* 2025-05-14 02:37:03.055116 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:37:03.055120 | orchestrator | 2025-05-14 02:37:03.055128 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-14 02:37:03.055132 | orchestrator | Wednesday 14 May 2025 02:36:55 +0000 (0:00:00.795) 0:13:12.186 ********* 2025-05-14 02:37:03.055136 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.055140 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.055144 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.055148 | orchestrator | 2025-05-14 02:37:03.055151 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-14 02:37:03.055155 | orchestrator | Wednesday 14 May 2025 02:36:55 +0000 (0:00:00.346) 0:13:12.532 ********* 2025-05-14 02:37:03.055159 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.055163 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.055167 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:37:03.055171 | orchestrator | 2025-05-14 02:37:03.055174 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-14 02:37:03.055178 | orchestrator | Wednesday 14 May 2025 02:36:57 +0000 (0:00:01.564) 0:13:14.097 ********* 2025-05-14 02:37:03.055182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:37:03.055186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:37:03.055190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:37:03.055194 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:37:03.055198 | orchestrator | 2025-05-14 02:37:03.055201 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-14 02:37:03.055205 | orchestrator | Wednesday 14 May 2025 02:36:57 +0000 (0:00:00.674) 0:13:14.771 ********* 2025-05-14 02:37:03.055209 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:37:03.055213 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:37:03.055217 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:37:03.055220 | orchestrator | 2025-05-14 02:37:03.055227 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:37:03.055231 | orchestrator | Wednesday 14 May 2025 02:36:58 +0000 (0:00:00.365) 0:13:15.137 ********* 2025-05-14 02:37:03.055235 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:37:03.055239 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:37:03.055242 | orchestrator 
| changed: [testbed-node-5]
2025-05-14 02:37:03.055246 | orchestrator |
2025-05-14 02:37:03.055250 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:37:03.055254 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
2025-05-14 02:37:03.055259 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-05-14 02:37:03.055262 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
2025-05-14 02:37:03.055266 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
2025-05-14 02:37:03.055270 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
2025-05-14 02:37:03.055274 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0
2025-05-14 02:37:03.055278 | orchestrator |
2025-05-14 02:37:03.055282 | orchestrator |
2025-05-14 02:37:03.055286 | orchestrator |
2025-05-14 02:37:03.055290 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:37:03.055293 | orchestrator | Wednesday 14 May 2025 02:36:59 +0000 (0:00:01.405) 0:13:16.542 *********
2025-05-14 02:37:03.055297 | orchestrator | ===============================================================================
2025-05-14 02:37:03.055301 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 47.15s
2025-05-14 02:37:03.055309 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 41.65s
2025-05-14 02:37:03.055312 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 23.79s
2025-05-14 02:37:03.055316 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.46s
2025-05-14 02:37:03.055320 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.12s
2025-05-14 02:37:03.055324 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.43s
2025-05-14 02:37:03.055327 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.53s
2025-05-14 02:37:03.055331 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.52s
2025-05-14 02:37:03.055335 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.73s
2025-05-14 02:37:03.055338 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 7.07s
2025-05-14 02:37:03.055342 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.52s
2025-05-14 02:37:03.055346 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.25s
2025-05-14 02:37:03.055350 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.81s
2025-05-14 02:37:03.055353 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.81s
2025-05-14 02:37:03.055357 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.50s
2025-05-14 02:37:03.055363 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.49s
2025-05-14 02:37:03.055366 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 3.64s
2025-05-14 02:37:03.055370 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.64s
2025-05-14 02:37:03.055374 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.51s
2025-05-14 02:37:03.055378 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.50s
2025-05-14 02:37:03.055381 | orchestrator | 2025-05-14 02:37:03 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED
2025-05-14 02:37:03.055385 | orchestrator | 2025-05-14 02:37:03 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:37:06.070442 | orchestrator | 2025-05-14 02:37:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:37:06.070514 | orchestrator | 2025-05-14 02:37:06 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED
2025-05-14 02:37:06.072862 | orchestrator | 2025-05-14 02:37:06 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED
2025-05-14 02:37:06.073080 | orchestrator | 2025-05-14 02:37:06 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:37:09.115842 | orchestrator | 2025-05-14 02:37:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:37:09.116484 | orchestrator | 2025-05-14 02:37:09 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED
2025-05-14 02:37:09.120474 | orchestrator | 2025-05-14 02:37:09 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED
2025-05-14 02:37:09.120517 | orchestrator | 2025-05-14 02:37:09 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:37:12.176026 | orchestrator | 2025-05-14 02:37:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:37:12.177992 | orchestrator | 2025-05-14 02:37:12 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in
state STARTED 2025-05-14 02:37:12.178666 | orchestrator | 2025-05-14 02:37:12 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state STARTED 2025-05-14 02:37:12.178707 | orchestrator | 2025-05-14 02:37:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:15.235103 | orchestrator | 2025-05-14 02:37:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:37:15.243380 | orchestrator | 2025-05-14 02:37:15.243449 | orchestrator | 2025-05-14 02:37:15.243457 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-14 02:37:15.243465 | orchestrator | 2025-05-14 02:37:15.243472 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-14 02:37:15.243479 | orchestrator | Wednesday 14 May 2025 02:33:36 +0000 (0:00:00.125) 0:00:00.125 ********* 2025-05-14 02:37:15.243489 | orchestrator | ok: [localhost] => { 2025-05-14 02:37:15.243498 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-14 02:37:15.243505 | orchestrator | } 2025-05-14 02:37:15.243511 | orchestrator | 2025-05-14 02:37:15.243518 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-14 02:37:15.243524 | orchestrator | Wednesday 14 May 2025 02:33:36 +0000 (0:00:00.041) 0:00:00.167 ********* 2025-05-14 02:37:15.243531 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-14 02:37:15.243539 | orchestrator | ...ignoring 2025-05-14 02:37:15.243546 | orchestrator | 2025-05-14 02:37:15.243553 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-14 02:37:15.243559 | orchestrator | Wednesday 14 May 2025 02:33:39 +0000 (0:00:02.555) 0:00:02.722 ********* 2025-05-14 02:37:15.243566 | orchestrator | skipping: [localhost] 2025-05-14 02:37:15.243573 | orchestrator | 2025-05-14 02:37:15.243579 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-14 02:37:15.243584 | orchestrator | Wednesday 14 May 2025 02:33:39 +0000 (0:00:00.055) 0:00:02.778 ********* 2025-05-14 02:37:15.243614 | orchestrator | ok: [localhost] 2025-05-14 02:37:15.243621 | orchestrator | 2025-05-14 02:37:15.243628 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:37:15.243634 | orchestrator | 2025-05-14 02:37:15.243639 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:37:15.243645 | orchestrator | Wednesday 14 May 2025 02:33:39 +0000 (0:00:00.140) 0:00:02.918 ********* 2025-05-14 02:37:15.243652 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.243658 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:15.243663 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:15.243669 | orchestrator | 2025-05-14 02:37:15.243675 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:37:15.243682 | orchestrator | Wednesday 14 May 2025 02:33:39 +0000 (0:00:00.439) 0:00:03.357 ********* 2025-05-14 02:37:15.243688 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-14 02:37:15.243696 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-14 02:37:15.243760 | 
orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-14 02:37:15.243766 | orchestrator | 2025-05-14 02:37:15.243772 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-14 02:37:15.243779 | orchestrator | 2025-05-14 02:37:15.243785 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-14 02:37:15.243791 | orchestrator | Wednesday 14 May 2025 02:33:40 +0000 (0:00:00.409) 0:00:03.766 ********* 2025-05-14 02:37:15.243798 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:37:15.243804 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 02:37:15.243810 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 02:37:15.243816 | orchestrator | 2025-05-14 02:37:15.243821 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 02:37:15.243827 | orchestrator | Wednesday 14 May 2025 02:33:40 +0000 (0:00:00.724) 0:00:04.491 ********* 2025-05-14 02:37:15.243833 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:15.243862 | orchestrator | 2025-05-14 02:37:15.243868 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-14 02:37:15.243875 | orchestrator | Wednesday 14 May 2025 02:33:41 +0000 (0:00:00.747) 0:00:05.238 ********* 2025-05-14 02:37:15.243916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:37:15.243926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:37:15.243935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:37:15.243953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:37:15.243968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:37:15.243976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:37:15.243983 | orchestrator | 2025-05-14 02:37:15.243991 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-14 02:37:15.243998 | orchestrator | Wednesday 14 May 2025 02:33:46 +0000 (0:00:05.120) 0:00:10.359 ********* 2025-05-14 02:37:15.244006 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.244015 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.244026 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.244034 | orchestrator | 2025-05-14 02:37:15.244041 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-14 02:37:15.244049 | orchestrator | Wednesday 14 May 2025 02:33:47 +0000 (0:00:00.890) 0:00:11.250 ********* 2025-05-14 02:37:15.244056 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.244064 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.244071 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.244078 | orchestrator | 2025-05-14 02:37:15.244086 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-14 02:37:15.244094 | orchestrator | Wednesday 14 May 2025 02:33:49 +0000 (0:00:01.692) 0:00:12.943 ********* 2025-05-14 02:37:15.244110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:37:15.244120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:37:15.244136 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:37:15.244149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:37:15.244158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:37:15.244166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:37:15.244178 | orchestrator | 2025-05-14 02:37:15.244185 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-14 02:37:15.244193 | orchestrator | Wednesday 14 May 2025 02:33:55 +0000 (0:00:06.716) 0:00:19.660 ********* 2025-05-14 02:37:15.244201 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.244208 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.244215 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.244222 | orchestrator | 2025-05-14 02:37:15.244230 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-14 02:37:15.244238 | orchestrator | Wednesday 14 May 2025 02:33:57 +0000 (0:00:01.131) 0:00:20.791 ********* 2025-05-14 02:37:15.244245 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.244253 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:15.244260 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:15.244267 | orchestrator | 2025-05-14 02:37:15.244275 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-14 02:37:15.244282 | orchestrator | Wednesday 14 May 2025 02:34:06 +0000 (0:00:09.145) 0:00:29.936 ********* 2025-05-14 02:37:15.244298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:37:15.244305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:37:15.244319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:37:15.244331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:37:15.244337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:37:15.244347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:37:15.244354 | orchestrator | 2025-05-14 02:37:15.244360 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-14 02:37:15.244435 | orchestrator | Wednesday 14 May 2025 02:34:10 +0000 (0:00:04.401) 0:00:34.338 ********* 2025-05-14 02:37:15.244442 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.244448 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:15.244454 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:15.244460 | orchestrator | 2025-05-14 02:37:15.244467 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-14 02:37:15.244472 | orchestrator | Wednesday 14 May 2025 02:34:11 +0000 (0:00:01.054) 0:00:35.392 ********* 2025-05-14 02:37:15.244478 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.244485 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:15.244491 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:15.244496 | orchestrator | 2025-05-14 02:37:15.244502 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-14 02:37:15.244509 | orchestrator | Wednesday 14 May 2025 02:34:12 +0000 (0:00:00.585) 0:00:35.977 ********* 2025-05-14 02:37:15.244514 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.244520 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:15.244526 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:15.244532 | orchestrator | 2025-05-14 
02:37:15.244538 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-14 02:37:15.244544 | orchestrator | Wednesday 14 May 2025 02:34:12 +0000 (0:00:00.387) 0:00:36.365 ********* 2025-05-14 02:37:15.244552 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-14 02:37:15.244559 | orchestrator | ...ignoring 2025-05-14 02:37:15.244566 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-14 02:37:15.244572 | orchestrator | ...ignoring 2025-05-14 02:37:15.244578 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-14 02:37:15.244584 | orchestrator | ...ignoring 2025-05-14 02:37:15.244635 | orchestrator | 2025-05-14 02:37:15.244651 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-14 02:37:15.244656 | orchestrator | Wednesday 14 May 2025 02:34:23 +0000 (0:00:11.087) 0:00:47.452 ********* 2025-05-14 02:37:15.244662 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.244668 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:15.244674 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:15.244680 | orchestrator | 2025-05-14 02:37:15.244686 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-14 02:37:15.244692 | orchestrator | Wednesday 14 May 2025 02:34:24 +0000 (0:00:00.631) 0:00:48.083 ********* 2025-05-14 02:37:15.244698 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:15.244704 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.244710 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.244716 | orchestrator | 2025-05-14 02:37:15.244722 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-14 02:37:15.244728 | orchestrator | Wednesday 14 May 2025 02:34:24 +0000 (0:00:00.549) 0:00:48.633 ********* 2025-05-14 02:37:15.244741 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:15.244747 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.244753 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.244759 | orchestrator | 2025-05-14 02:37:15.244771 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-14 02:37:15.244777 | orchestrator | Wednesday 14 May 2025 02:34:25 +0000 (0:00:00.497) 0:00:49.131 ********* 2025-05-14 02:37:15.244783 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:15.244789 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.244796 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.244801 | orchestrator | 2025-05-14 02:37:15.244808 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-14 02:37:15.244813 | orchestrator | Wednesday 14 May 2025 02:34:26 +0000 (0:00:00.627) 0:00:49.759 ********* 2025-05-14 02:37:15.244820 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.244826 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:15.244831 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:15.244837 | orchestrator | 2025-05-14 02:37:15.244844 | orchestrator | TASK [mariadb : Fail when 
MariaDB services are not synced across the whole cluster] *** 2025-05-14 02:37:15.244851 | orchestrator | Wednesday 14 May 2025 02:34:26 +0000 (0:00:00.734) 0:00:50.494 ********* 2025-05-14 02:37:15.244857 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:15.244863 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.244869 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.244876 | orchestrator | 2025-05-14 02:37:15.244882 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 02:37:15.244888 | orchestrator | Wednesday 14 May 2025 02:34:27 +0000 (0:00:00.705) 0:00:51.199 ********* 2025-05-14 02:37:15.244895 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.244901 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.244907 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-14 02:37:15.244914 | orchestrator | 2025-05-14 02:37:15.244920 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-14 02:37:15.244926 | orchestrator | Wednesday 14 May 2025 02:34:28 +0000 (0:00:00.546) 0:00:51.746 ********* 2025-05-14 02:37:15.244933 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.244939 | orchestrator | 2025-05-14 02:37:15.244945 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-14 02:37:15.244952 | orchestrator | Wednesday 14 May 2025 02:34:38 +0000 (0:00:10.536) 0:01:02.282 ********* 2025-05-14 02:37:15.244958 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.244964 | orchestrator | 2025-05-14 02:37:15.244970 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 02:37:15.244976 | orchestrator | Wednesday 14 May 2025 02:34:38 +0000 (0:00:00.134) 0:01:02.417 ********* 2025-05-14 02:37:15.244983 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:15.244989 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.244995 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.245000 | orchestrator | 2025-05-14 02:37:15.245007 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-14 02:37:15.245013 | orchestrator | Wednesday 14 May 2025 02:34:39 +0000 (0:00:01.050) 0:01:03.467 ********* 2025-05-14 02:37:15.245019 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.245025 | orchestrator | 2025-05-14 02:37:15.245031 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-14 02:37:15.245037 | orchestrator | Wednesday 14 May 2025 02:34:50 +0000 (0:00:10.769) 0:01:14.237 ********* 2025-05-14 02:37:15.245043 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
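Editor's note: the "Check/Wait for ... MariaDB service port liveness" tasks above (including the earlier ignored timeouts and the "(10 retries left)" retry) behave like an Ansible wait_for probe that watches port 3306 for the "MariaDB" string in the server greeting; the "Timeout when waiting for search string MariaDB in 192.168.16.10:3306" messages are the failure output of exactly that kind of check. A minimal sketch, assuming illustrative host, timeout and retry values rather than the role's actual defaults:

    # Hedged sketch of a wait_for-style liveness probe; values are illustrative,
    # not copied from the kolla-ansible mariadb role.
    - name: Wait for MariaDB service port liveness
      ansible.builtin.wait_for:
        host: 192.168.16.10          # API address of the node being checked
        port: 3306
        search_regex: "MariaDB"      # match the server greeting banner
        timeout: 10
      register: check_mariadb_port
      retries: 10                    # mirrors the "(10 retries left)" retry loop
      delay: 6
      until: check_mariadb_port is succeeded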
2025-05-14 02:37:15.245050 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.245056 | orchestrator | 2025-05-14 02:37:15.245063 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-14 02:37:15.245075 | orchestrator | Wednesday 14 May 2025 02:34:57 +0000 (0:00:07.188) 0:01:21.425 ********* 2025-05-14 02:37:15.245082 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.245090 | orchestrator | 2025-05-14 02:37:15.245098 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-14 02:37:15.245105 | orchestrator | Wednesday 14 May 2025 02:35:00 +0000 (0:00:02.275) 0:01:23.700 ********* 2025-05-14 02:37:15.245113 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.245120 | orchestrator | 2025-05-14 02:37:15.245127 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-14 02:37:15.245135 | orchestrator | Wednesday 14 May 2025 02:35:00 +0000 (0:00:00.119) 0:01:23.820 ********* 2025-05-14 02:37:15.245143 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:15.245150 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.245158 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.245166 | orchestrator | 2025-05-14 02:37:15.245173 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-14 02:37:15.245181 | orchestrator | Wednesday 14 May 2025 02:35:00 +0000 (0:00:00.483) 0:01:24.303 ********* 2025-05-14 02:37:15.245188 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:15.245197 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:15.245204 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:15.245212 | orchestrator | 2025-05-14 02:37:15.245222 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-14 02:37:15.245230 | orchestrator | Wednesday 14 May 2025 02:35:01 +0000 (0:00:00.421) 0:01:24.724 ********* 2025-05-14 02:37:15.245238 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-14 02:37:15.245245 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:15.245253 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:15.245260 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.245268 | orchestrator | 2025-05-14 02:37:15.245275 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-14 02:37:15.245282 | orchestrator | skipping: no hosts matched 2025-05-14 02:37:15.245290 | orchestrator | 2025-05-14 02:37:15.245297 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-14 02:37:15.245305 | orchestrator | 2025-05-14 02:37:15.245312 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-14 02:37:15.245320 | orchestrator | Wednesday 14 May 2025 02:35:17 +0000 (0:00:16.471) 0:01:41.195 ********* 2025-05-14 02:37:15.245327 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:15.245334 | orchestrator | 2025-05-14 02:37:15.245345 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-14 02:37:15.245352 | orchestrator | Wednesday 14 May 2025 02:35:36 +0000 (0:00:18.976) 0:02:00.171 ********* 2025-05-14 02:37:15.245358 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:15.245364 | orchestrator | 
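Editor's note: the "Wait for ... MariaDB service to sync WSREP" handlers and tasks in this play poll Galera's sync state before the next cluster member is restarted. A minimal sketch of such a check, assuming the monitor credentials configured for mariadb-clustercheck above and using community.mysql.mysql_query; kolla-ansible's actual implementation may differ:

    # Hedged sketch: wait until the local Galera node reports "Synced".
    # Login host, user and password variable are placeholders.
    - name: Wait for MariaDB service to sync WSREP
      community.mysql.mysql_query:
        login_host: 192.168.16.10
        login_user: monitor
        login_password: "{{ mariadb_monitor_password }}"
        query: "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_status
      until: wsrep_status.query_result[0][0].Value == 'Synced'
      retries: 10
      delay: 6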
2025-05-14 02:37:15.245370 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-14 02:37:15.245376 | orchestrator | Wednesday 14 May 2025 02:35:57 +0000 (0:00:20.556) 0:02:20.727 ********* 2025-05-14 02:37:15.245382 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:15.245388 | orchestrator | 2025-05-14 02:37:15.245394 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-14 02:37:15.245400 | orchestrator | 2025-05-14 02:37:15.245407 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-14 02:37:15.245413 | orchestrator | Wednesday 14 May 2025 02:35:59 +0000 (0:00:02.716) 0:02:23.444 ********* 2025-05-14 02:37:15.245419 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:15.245424 | orchestrator | 2025-05-14 02:37:15.245431 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-14 02:37:15.245437 | orchestrator | Wednesday 14 May 2025 02:36:16 +0000 (0:00:16.608) 0:02:40.053 ********* 2025-05-14 02:37:15.245443 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:15.245449 | orchestrator | 2025-05-14 02:37:15.245455 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-14 02:37:15.245466 | orchestrator | Wednesday 14 May 2025 02:36:36 +0000 (0:00:20.569) 0:03:00.622 ********* 2025-05-14 02:37:15.245472 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:15.245479 | orchestrator | 2025-05-14 02:37:15.245485 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-14 02:37:15.245490 | orchestrator | 2025-05-14 02:37:15.245497 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-14 02:37:15.245503 | orchestrator | Wednesday 14 May 2025 02:36:39 +0000 (0:00:02.446) 0:03:03.068 ********* 2025-05-14 02:37:15.245509 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.245514 | orchestrator | 2025-05-14 02:37:15.245521 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-14 02:37:15.245527 | orchestrator | Wednesday 14 May 2025 02:36:52 +0000 (0:00:12.687) 0:03:15.755 ********* 2025-05-14 02:37:15.245533 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.245539 | orchestrator | 2025-05-14 02:37:15.245545 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-14 02:37:15.245551 | orchestrator | Wednesday 14 May 2025 02:36:56 +0000 (0:00:04.539) 0:03:20.295 ********* 2025-05-14 02:37:15.245557 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.245563 | orchestrator | 2025-05-14 02:37:15.245569 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-14 02:37:15.245575 | orchestrator | 2025-05-14 02:37:15.245581 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-14 02:37:15.245587 | orchestrator | Wednesday 14 May 2025 02:36:59 +0000 (0:00:02.619) 0:03:22.914 ********* 2025-05-14 02:37:15.245609 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:15.245615 | orchestrator | 2025-05-14 02:37:15.245621 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-14 02:37:15.245627 | orchestrator | Wednesday 14 
May 2025 02:37:00 +0000 (0:00:00.778) 0:03:23.692 ********* 2025-05-14 02:37:15.245633 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.245639 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.245645 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.245651 | orchestrator | 2025-05-14 02:37:15.245657 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-14 02:37:15.245663 | orchestrator | Wednesday 14 May 2025 02:37:02 +0000 (0:00:02.724) 0:03:26.417 ********* 2025-05-14 02:37:15.245669 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.245675 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.245681 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.245687 | orchestrator | 2025-05-14 02:37:15.245693 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-14 02:37:15.245699 | orchestrator | Wednesday 14 May 2025 02:37:04 +0000 (0:00:02.254) 0:03:28.671 ********* 2025-05-14 02:37:15.245705 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.245712 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.245718 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.245723 | orchestrator | 2025-05-14 02:37:15.245729 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-14 02:37:15.245736 | orchestrator | Wednesday 14 May 2025 02:37:07 +0000 (0:00:02.489) 0:03:31.161 ********* 2025-05-14 02:37:15.245742 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.245747 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.245753 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:15.245759 | orchestrator | 2025-05-14 02:37:15.245765 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-14 02:37:15.245771 | orchestrator | Wednesday 14 May 2025 02:37:09 +0000 (0:00:02.265) 0:03:33.426 ********* 2025-05-14 02:37:15.245776 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:15.245782 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:15.245791 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:15.245797 | orchestrator | 2025-05-14 02:37:15.245802 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-14 02:37:15.245813 | orchestrator | Wednesday 14 May 2025 02:37:13 +0000 (0:00:03.688) 0:03:37.114 ********* 2025-05-14 02:37:15.245819 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:15.245825 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:15.245831 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:15.245836 | orchestrator | 2025-05-14 02:37:15.245842 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:37:15.245848 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-14 02:37:15.245855 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-14 02:37:15.245867 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-14 02:37:15.245873 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-14 02:37:15.245878 | orchestrator | 2025-05-14 02:37:15.245884 | orchestrator | 2025-05-14 
02:37:15.245892 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:37:15.245904 | orchestrator | Wednesday 14 May 2025 02:37:13 +0000 (0:00:00.451) 0:03:37.566 ********* 2025-05-14 02:37:15.245910 | orchestrator | =============================================================================== 2025-05-14 02:37:15.245916 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.13s 2025-05-14 02:37:15.245921 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.58s 2025-05-14 02:37:15.245927 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 16.47s 2025-05-14 02:37:15.245933 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.69s 2025-05-14 02:37:15.245939 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.09s 2025-05-14 02:37:15.245945 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.77s 2025-05-14 02:37:15.245951 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.54s 2025-05-14 02:37:15.245957 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 9.15s 2025-05-14 02:37:15.245962 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.19s 2025-05-14 02:37:15.245968 | orchestrator | mariadb : Copying over config.json files for services ------------------- 6.72s 2025-05-14 02:37:15.245974 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.16s 2025-05-14 02:37:15.245980 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 5.12s 2025-05-14 02:37:15.245986 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.54s 2025-05-14 02:37:15.245992 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.40s 2025-05-14 02:37:15.245998 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.69s 2025-05-14 02:37:15.246004 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.72s 2025-05-14 02:37:15.246010 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.62s 2025-05-14 02:37:15.246109 | orchestrator | Check MariaDB service --------------------------------------------------- 2.56s 2025-05-14 02:37:15.246116 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.49s 2025-05-14 02:37:15.246122 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.28s 2025-05-14 02:37:15.246128 | orchestrator | 2025-05-14 02:37:15 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:37:15.246135 | orchestrator | 2025-05-14 02:37:15 | INFO  | Task 36729c45-6f2a-45bb-baac-0d5ba2f8cd22 is in state SUCCESS 2025-05-14 02:37:15.246152 | orchestrator | 2025-05-14 02:37:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:18.298321 | orchestrator | 2025-05-14 02:37:18 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:37:18.298700 | orchestrator | 2025-05-14 02:37:18 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:37:18.301308 | orchestrator | 2025-05-14 02:37:18 
| INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:37:18.303799 | orchestrator | 2025-05-14 02:37:18 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:37:18.303855 | orchestrator | 2025-05-14 02:37:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:21.340374 | orchestrator | 2025-05-14 02:37:21 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:37:21.342222 | orchestrator | 2025-05-14 02:37:21 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:37:21.346563 | orchestrator | 2025-05-14 02:37:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:37:21.348196 | orchestrator | 2025-05-14 02:37:21 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:37:21.348502 | orchestrator | 2025-05-14 02:37:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:24.379377 | orchestrator | 2025-05-14 02:37:24 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:37:24.379527 | orchestrator | 2025-05-14 02:37:24 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:37:24.380696 | orchestrator | 2025-05-14 02:37:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:37:24.381681 | orchestrator | 2025-05-14 02:37:24 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:37:24.381771 | orchestrator | 2025-05-14 02:37:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:27.435066 | orchestrator | 2025-05-14 02:37:27 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:37:27.435309 | orchestrator | 2025-05-14 02:37:27 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:37:27.435890 | orchestrator | 2025-05-14 02:37:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:37:27.441881 | orchestrator | 2025-05-14 02:37:27 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:37:27.441940 | orchestrator | 2025-05-14 02:37:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:30.477006 | orchestrator | 2025-05-14 02:37:30 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:37:30.477884 | orchestrator | 2025-05-14 02:37:30 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:37:30.479331 | orchestrator | 2025-05-14 02:37:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:37:30.481952 | orchestrator | 2025-05-14 02:37:30 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:37:30.481998 | orchestrator | 2025-05-14 02:37:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:33.523693 | orchestrator | 2025-05-14 02:37:33 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:37:33.524957 | orchestrator | 2025-05-14 02:37:33 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:37:33.527941 | orchestrator | 2025-05-14 02:37:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:37:33.528744 | orchestrator | 2025-05-14 02:37:33 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:37:33.530379 | orchestrator | 2025-05-14 02:37:33 | 
INFO  | Wait 1 second(s) until the next check
[... identical status polling repeats every ~3 seconds from 02:37:36 to 02:38:37: tasks e39d9e75-9ca5-4902-af4a-d1f5d0d730d2, dfc02968-dffd-42bb-9aa4-4382ac0da5f1, d96aeed1-a30d-4e84-85b3-93c7cfc3e055 and c45d9fd8-9960-42fa-a05d-9a954dbde9fd remain in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...]
2025-05-14 02:38:40.578230 | orchestrator | 2025-05-14 02:38:40 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:38:40.579294 | orchestrator | 2025-05-14 02:38:40 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:38:40.580972 | orchestrator | 2025-05-14 02:38:40 | INFO  | Task
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:38:40.583657 | orchestrator | 2025-05-14 02:38:40 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:38:40.583713 | orchestrator | 2025-05-14 02:38:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:43.634934 | orchestrator | 2025-05-14 02:38:43 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:38:43.637817 | orchestrator | 2025-05-14 02:38:43 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:38:43.639904 | orchestrator | 2025-05-14 02:38:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:38:43.644759 | orchestrator | 2025-05-14 02:38:43 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:38:43.644867 | orchestrator | 2025-05-14 02:38:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:46.695543 | orchestrator | 2025-05-14 02:38:46 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:38:46.696360 | orchestrator | 2025-05-14 02:38:46 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:38:46.699950 | orchestrator | 2025-05-14 02:38:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:38:46.701286 | orchestrator | 2025-05-14 02:38:46 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:38:46.701325 | orchestrator | 2025-05-14 02:38:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:49.753540 | orchestrator | 2025-05-14 02:38:49 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:38:49.754942 | orchestrator | 2025-05-14 02:38:49 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:38:49.756848 | orchestrator | 2025-05-14 02:38:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:38:49.759690 | orchestrator | 2025-05-14 02:38:49 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:38:49.759762 | orchestrator | 2025-05-14 02:38:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:52.807631 | orchestrator | 2025-05-14 02:38:52 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:38:52.809253 | orchestrator | 2025-05-14 02:38:52 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:38:52.810769 | orchestrator | 2025-05-14 02:38:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:38:52.812182 | orchestrator | 2025-05-14 02:38:52 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:38:52.812258 | orchestrator | 2025-05-14 02:38:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:55.854086 | orchestrator | 2025-05-14 02:38:55 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:38:55.855630 | orchestrator | 2025-05-14 02:38:55 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:38:55.856840 | orchestrator | 2025-05-14 02:38:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:38:55.858274 | orchestrator | 2025-05-14 02:38:55 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:38:55.858650 | orchestrator | 2025-05-14 02:38:55 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 02:38:58.891649 | orchestrator | 2025-05-14 02:38:58 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state STARTED 2025-05-14 02:38:58.893712 | orchestrator | 2025-05-14 02:38:58 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:38:58.894982 | orchestrator | 2025-05-14 02:38:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:38:58.896053 | orchestrator | 2025-05-14 02:38:58 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:38:58.896383 | orchestrator | 2025-05-14 02:38:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:01.950961 | orchestrator | 2025-05-14 02:39:01 | INFO  | Task e39d9e75-9ca5-4902-af4a-d1f5d0d730d2 is in state SUCCESS 2025-05-14 02:39:01.952149 | orchestrator | 2025-05-14 02:39:01.952210 | orchestrator | 2025-05-14 02:39:01.952233 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:39:01.952254 | orchestrator | 2025-05-14 02:39:01.952349 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:39:01.952477 | orchestrator | Wednesday 14 May 2025 02:37:17 +0000 (0:00:00.363) 0:00:00.363 ********* 2025-05-14 02:39:01.952489 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.952501 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.952512 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.952523 | orchestrator | 2025-05-14 02:39:01.952534 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:39:01.952545 | orchestrator | Wednesday 14 May 2025 02:37:18 +0000 (0:00:00.463) 0:00:00.826 ********* 2025-05-14 02:39:01.952556 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-14 02:39:01.952568 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-14 02:39:01.952634 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-14 02:39:01.952647 | orchestrator | 2025-05-14 02:39:01.952658 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-14 02:39:01.952669 | orchestrator | 2025-05-14 02:39:01.952681 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 02:39:01.952692 | orchestrator | Wednesday 14 May 2025 02:37:18 +0000 (0:00:00.417) 0:00:01.243 ********* 2025-05-14 02:39:01.952703 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:39:01.952719 | orchestrator | 2025-05-14 02:39:01.952732 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-14 02:39:01.952745 | orchestrator | Wednesday 14 May 2025 02:37:19 +0000 (0:00:00.741) 0:00:01.985 ********* 2025-05-14 02:39:01.952785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:39:01.952853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:39:01.952878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:39:01.952900 | orchestrator | 2025-05-14 02:39:01.952913 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-14 02:39:01.952926 | orchestrator | Wednesday 14 May 2025 02:37:21 +0000 (0:00:01.805) 0:00:03.790 ********* 2025-05-14 02:39:01.952940 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.952953 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.952966 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.952977 | orchestrator | 2025-05-14 02:39:01.952988 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 02:39:01.953000 | orchestrator | Wednesday 14 May 2025 02:37:21 +0000 (0:00:00.295) 0:00:04.086 ********* 2025-05-14 02:39:01.953017 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-14 02:39:01.953028 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-14 02:39:01.953039 | orchestrator | 
skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-14 02:39:01.953050 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-14 02:39:01.953061 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-14 02:39:01.953071 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-14 02:39:01.953082 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-14 02:39:01.953093 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-14 02:39:01.953104 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-14 02:39:01.953115 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-14 02:39:01.953125 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-14 02:39:01.953136 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-14 02:39:01.953147 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-14 02:39:01.953158 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-14 02:39:01.953169 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-14 02:39:01.953179 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-14 02:39:01.953190 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-14 02:39:01.953201 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-14 02:39:01.953217 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-14 02:39:01.953228 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-14 02:39:01.953239 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-14 02:39:01.953251 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-14 02:39:01.953264 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-14 02:39:01.953281 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-14 02:39:01.953292 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-14 02:39:01.953303 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-14 02:39:01.953315 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-14 02:39:01.953325 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-14 02:39:01.953337 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-14 02:39:01.953347 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-14 02:39:01.953358 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-14 02:39:01.953369 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-14 02:39:01.953380 | orchestrator | 2025-05-14 02:39:01.953391 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.953402 | orchestrator | Wednesday 14 May 2025 02:37:22 +0000 (0:00:00.904) 0:00:04.990 ********* 2025-05-14 02:39:01.953413 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.953424 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.953435 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.953446 | orchestrator | 2025-05-14 02:39:01.953457 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.953467 | orchestrator | Wednesday 14 May 2025 02:37:22 +0000 (0:00:00.418) 0:00:05.408 ********* 2025-05-14 02:39:01.953478 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.953490 | orchestrator | 2025-05-14 02:39:01.953506 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.953517 | orchestrator | Wednesday 14 May 2025 02:37:22 +0000 (0:00:00.120) 0:00:05.528 ********* 2025-05-14 02:39:01.953528 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.953538 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.953549 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.953560 | orchestrator | 2025-05-14 02:39:01.953571 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.953611 | orchestrator | Wednesday 14 May 2025 02:37:23 +0000 (0:00:00.361) 0:00:05.889 ********* 2025-05-14 02:39:01.953630 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.953650 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.953669 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.953687 | orchestrator | 2025-05-14 02:39:01.953707 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.953727 | orchestrator | Wednesday 14 May 2025 02:37:23 +0000 (0:00:00.253) 0:00:06.142 ********* 2025-05-14 02:39:01.953746 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.953765 | orchestrator | 2025-05-14 02:39:01.953783 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.953802 | orchestrator | Wednesday 14 May 2025 02:37:23 +0000 (0:00:00.107) 0:00:06.250 ********* 2025-05-14 02:39:01.953821 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.953842 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.953875 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.953897 | 
orchestrator | 2025-05-14 02:39:01.953913 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.953924 | orchestrator | Wednesday 14 May 2025 02:37:23 +0000 (0:00:00.472) 0:00:06.723 ********* 2025-05-14 02:39:01.953935 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.953946 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.953957 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.953968 | orchestrator | 2025-05-14 02:39:01.953979 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.953990 | orchestrator | Wednesday 14 May 2025 02:37:24 +0000 (0:00:00.554) 0:00:07.277 ********* 2025-05-14 02:39:01.954001 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.954012 | orchestrator | 2025-05-14 02:39:01.954104 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.954116 | orchestrator | Wednesday 14 May 2025 02:37:24 +0000 (0:00:00.115) 0:00:07.393 ********* 2025-05-14 02:39:01.954128 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.954141 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.954162 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.954182 | orchestrator | 2025-05-14 02:39:01.954202 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.954223 | orchestrator | Wednesday 14 May 2025 02:37:25 +0000 (0:00:00.405) 0:00:07.799 ********* 2025-05-14 02:39:01.954242 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.954263 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.954280 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.954291 | orchestrator | 2025-05-14 02:39:01.954302 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.954314 | orchestrator | Wednesday 14 May 2025 02:37:25 +0000 (0:00:00.448) 0:00:08.247 ********* 2025-05-14 02:39:01.954325 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.954335 | orchestrator | 2025-05-14 02:39:01.954347 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.954357 | orchestrator | Wednesday 14 May 2025 02:37:25 +0000 (0:00:00.117) 0:00:08.365 ********* 2025-05-14 02:39:01.954368 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.954379 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.954390 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.954400 | orchestrator | 2025-05-14 02:39:01.954412 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.954431 | orchestrator | Wednesday 14 May 2025 02:37:26 +0000 (0:00:00.408) 0:00:08.773 ********* 2025-05-14 02:39:01.954450 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.954467 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.954478 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.954489 | orchestrator | 2025-05-14 02:39:01.954501 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.954512 | orchestrator | Wednesday 14 May 2025 02:37:26 +0000 (0:00:00.309) 0:00:09.083 ********* 2025-05-14 02:39:01.954529 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.954551 | orchestrator | 2025-05-14 02:39:01.954606 
| orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.954625 | orchestrator | Wednesday 14 May 2025 02:37:26 +0000 (0:00:00.260) 0:00:09.343 ********* 2025-05-14 02:39:01.954642 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.954661 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.954680 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.954700 | orchestrator | 2025-05-14 02:39:01.954720 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.954740 | orchestrator | Wednesday 14 May 2025 02:37:26 +0000 (0:00:00.286) 0:00:09.630 ********* 2025-05-14 02:39:01.954760 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.954778 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.954798 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.954834 | orchestrator | 2025-05-14 02:39:01.954854 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.954872 | orchestrator | Wednesday 14 May 2025 02:37:27 +0000 (0:00:00.518) 0:00:10.148 ********* 2025-05-14 02:39:01.954890 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.954907 | orchestrator | 2025-05-14 02:39:01.954922 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.954941 | orchestrator | Wednesday 14 May 2025 02:37:27 +0000 (0:00:00.146) 0:00:10.294 ********* 2025-05-14 02:39:01.954959 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.954977 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.954997 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.955016 | orchestrator | 2025-05-14 02:39:01.955034 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.955051 | orchestrator | Wednesday 14 May 2025 02:37:28 +0000 (0:00:00.628) 0:00:10.923 ********* 2025-05-14 02:39:01.955073 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.955084 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.955095 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.955106 | orchestrator | 2025-05-14 02:39:01.955117 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.955133 | orchestrator | Wednesday 14 May 2025 02:37:28 +0000 (0:00:00.584) 0:00:11.508 ********* 2025-05-14 02:39:01.955153 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.955172 | orchestrator | 2025-05-14 02:39:01.955188 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.955199 | orchestrator | Wednesday 14 May 2025 02:37:28 +0000 (0:00:00.171) 0:00:11.679 ********* 2025-05-14 02:39:01.955214 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.955233 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.955253 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.955267 | orchestrator | 2025-05-14 02:39:01.955278 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.955289 | orchestrator | Wednesday 14 May 2025 02:37:29 +0000 (0:00:00.562) 0:00:12.242 ********* 2025-05-14 02:39:01.955300 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.955311 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.955322 | 
orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.955333 | orchestrator | 2025-05-14 02:39:01.955344 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.955355 | orchestrator | Wednesday 14 May 2025 02:37:29 +0000 (0:00:00.394) 0:00:12.637 ********* 2025-05-14 02:39:01.955366 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.955377 | orchestrator | 2025-05-14 02:39:01.955388 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.955398 | orchestrator | Wednesday 14 May 2025 02:37:30 +0000 (0:00:00.540) 0:00:13.178 ********* 2025-05-14 02:39:01.955508 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.955532 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.955552 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.955571 | orchestrator | 2025-05-14 02:39:01.955614 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.955625 | orchestrator | Wednesday 14 May 2025 02:37:30 +0000 (0:00:00.399) 0:00:13.577 ********* 2025-05-14 02:39:01.955636 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.955648 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.955659 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.955676 | orchestrator | 2025-05-14 02:39:01.955703 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.955724 | orchestrator | Wednesday 14 May 2025 02:37:31 +0000 (0:00:00.559) 0:00:14.136 ********* 2025-05-14 02:39:01.955743 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.955761 | orchestrator | 2025-05-14 02:39:01.955772 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.955783 | orchestrator | Wednesday 14 May 2025 02:37:31 +0000 (0:00:00.131) 0:00:14.268 ********* 2025-05-14 02:39:01.955805 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.955816 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.955831 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.955851 | orchestrator | 2025-05-14 02:39:01.955871 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.955890 | orchestrator | Wednesday 14 May 2025 02:37:31 +0000 (0:00:00.403) 0:00:14.672 ********* 2025-05-14 02:39:01.955902 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.955913 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.955924 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.955935 | orchestrator | 2025-05-14 02:39:01.955946 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.955958 | orchestrator | Wednesday 14 May 2025 02:37:32 +0000 (0:00:00.430) 0:00:15.102 ********* 2025-05-14 02:39:01.955968 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.955979 | orchestrator | 2025-05-14 02:39:01.955990 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.956000 | orchestrator | Wednesday 14 May 2025 02:37:32 +0000 (0:00:00.148) 0:00:15.251 ********* 2025-05-14 02:39:01.956011 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.956022 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.956033 | orchestrator | skipping: 
[testbed-node-2] 2025-05-14 02:39:01.956044 | orchestrator | 2025-05-14 02:39:01.956054 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:39:01.956065 | orchestrator | Wednesday 14 May 2025 02:37:32 +0000 (0:00:00.399) 0:00:15.650 ********* 2025-05-14 02:39:01.956076 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:01.956087 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:01.956098 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:01.956109 | orchestrator | 2025-05-14 02:39:01.956120 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:39:01.956131 | orchestrator | Wednesday 14 May 2025 02:37:33 +0000 (0:00:00.484) 0:00:16.135 ********* 2025-05-14 02:39:01.956142 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.956153 | orchestrator | 2025-05-14 02:39:01.956164 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:39:01.956175 | orchestrator | Wednesday 14 May 2025 02:37:33 +0000 (0:00:00.145) 0:00:16.281 ********* 2025-05-14 02:39:01.956186 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.956197 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.956208 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.956219 | orchestrator | 2025-05-14 02:39:01.956230 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-14 02:39:01.956241 | orchestrator | Wednesday 14 May 2025 02:37:34 +0000 (0:00:00.620) 0:00:16.901 ********* 2025-05-14 02:39:01.956251 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:39:01.956262 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:39:01.956273 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:39:01.956284 | orchestrator | 2025-05-14 02:39:01.956295 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-14 02:39:01.956306 | orchestrator | Wednesday 14 May 2025 02:37:37 +0000 (0:00:03.060) 0:00:19.962 ********* 2025-05-14 02:39:01.956324 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-14 02:39:01.956355 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-14 02:39:01.956375 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-14 02:39:01.956393 | orchestrator | 2025-05-14 02:39:01.956413 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-14 02:39:01.956432 | orchestrator | Wednesday 14 May 2025 02:37:40 +0000 (0:00:02.832) 0:00:22.795 ********* 2025-05-14 02:39:01.956453 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-14 02:39:01.956484 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-14 02:39:01.956500 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-14 02:39:01.956511 | orchestrator | 2025-05-14 02:39:01.956522 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-14 02:39:01.956533 | orchestrator | Wednesday 14 May 2025 02:37:43 +0000 (0:00:03.168) 0:00:25.963 ********* 2025-05-14 02:39:01.956543 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-14 02:39:01.956554 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-14 02:39:01.956565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-14 02:39:01.956654 | orchestrator | 2025-05-14 02:39:01.956678 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-14 02:39:01.956697 | orchestrator | Wednesday 14 May 2025 02:37:45 +0000 (0:00:02.445) 0:00:28.409 ********* 2025-05-14 02:39:01.956715 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.956726 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.956737 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.956748 | orchestrator | 2025-05-14 02:39:01.956759 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-14 02:39:01.956770 | orchestrator | Wednesday 14 May 2025 02:37:46 +0000 (0:00:00.468) 0:00:28.877 ********* 2025-05-14 02:39:01.956781 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.956798 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.956810 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.956820 | orchestrator | 2025-05-14 02:39:01.956874 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 02:39:01.956895 | orchestrator | Wednesday 14 May 2025 02:37:46 +0000 (0:00:00.441) 0:00:29.318 ********* 2025-05-14 02:39:01.956914 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:39:01.956971 | orchestrator | 2025-05-14 02:39:01.956988 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-14 02:39:01.957000 | orchestrator | Wednesday 14 May 2025 02:37:47 +0000 (0:00:00.690) 0:00:30.009 ********* 2025-05-14 02:39:01.957027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:39:01.957066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:39:01.957090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:39:01.957110 | orchestrator | 2025-05-14 02:39:01.957121 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-14 02:39:01.957133 | orchestrator | Wednesday 14 May 2025 02:37:49 +0000 (0:00:01.822) 0:00:31.832 ********* 2025-05-14 02:39:01.957151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:39:01.957163 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.957181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:39:01.957201 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.957226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:39:01.957245 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.957256 | orchestrator | 2025-05-14 02:39:01.957266 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-14 02:39:01.957282 | orchestrator | Wednesday 14 May 2025 02:37:50 +0000 (0:00:01.111) 0:00:32.944 ********* 2025-05-14 02:39:01.957301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:39:01.957312 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.957327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:39:01.957344 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.957368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:39:01.957380 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.957390 | orchestrator | 2025-05-14 02:39:01.957400 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-14 02:39:01.957410 | orchestrator | Wednesday 14 May 2025 02:37:51 +0000 (0:00:01.194) 0:00:34.138 ********* 2025-05-14 02:39:01.957426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:39:01.957450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:39:01.957469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:39:01.957487 | orchestrator | 2025-05-14 02:39:01.957497 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 02:39:01.957507 | orchestrator | Wednesday 14 May 2025 02:37:56 +0000 (0:00:04.808) 0:00:38.946 ********* 2025-05-14 02:39:01.957516 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:01.957526 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:01.957536 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:01.957546 | orchestrator | 2025-05-14 02:39:01.957555 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 02:39:01.957565 | orchestrator | Wednesday 14 May 2025 02:37:56 +0000 (0:00:00.340) 0:00:39.287 ********* 2025-05-14 02:39:01.957575 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:39:01.957612 | orchestrator | 2025-05-14 02:39:01.957629 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-14 02:39:01.957646 | orchestrator | Wednesday 14 May 2025 02:37:57 +0000 (0:00:00.561) 0:00:39.849 ********* 2025-05-14 02:39:01.957663 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:39:01.957679 | orchestrator | 2025-05-14 02:39:01.957689 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-14 02:39:01.957699 | orchestrator | Wednesday 14 May 2025 02:37:59 +0000 (0:00:02.527) 0:00:42.376 ********* 2025-05-14 02:39:01.957709 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:39:01.957719 | orchestrator | 2025-05-14 02:39:01.957728 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-14 02:39:01.957738 | orchestrator | Wednesday 14 May 2025 02:38:01 +0000 (0:00:02.268) 0:00:44.644 ********* 2025-05-14 02:39:01.957747 | orchestrator | changed: 
[testbed-node-0] 2025-05-14 02:39:01.957757 | orchestrator | 2025-05-14 02:39:01.957767 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-14 02:39:01.957777 | orchestrator | Wednesday 14 May 2025 02:38:16 +0000 (0:00:14.458) 0:00:59.103 ********* 2025-05-14 02:39:01.957786 | orchestrator | 2025-05-14 02:39:01.957802 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-14 02:39:01.957812 | orchestrator | Wednesday 14 May 2025 02:38:16 +0000 (0:00:00.053) 0:00:59.156 ********* 2025-05-14 02:39:01.957823 | orchestrator | 2025-05-14 02:39:01.957839 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-14 02:39:01.957856 | orchestrator | Wednesday 14 May 2025 02:38:16 +0000 (0:00:00.137) 0:00:59.294 ********* 2025-05-14 02:39:01.957871 | orchestrator | 2025-05-14 02:39:01.957885 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-14 02:39:01.957908 | orchestrator | Wednesday 14 May 2025 02:38:16 +0000 (0:00:00.050) 0:00:59.345 ********* 2025-05-14 02:39:01.957918 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:39:01.957928 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:39:01.957938 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:39:01.957947 | orchestrator | 2025-05-14 02:39:01.957957 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:39:01.957968 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-14 02:39:01.957979 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-14 02:39:01.957988 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-14 02:39:01.957998 | orchestrator | 2025-05-14 02:39:01.958008 | orchestrator | 2025-05-14 02:39:01.958066 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:39:01.958079 | orchestrator | Wednesday 14 May 2025 02:38:58 +0000 (0:00:41.947) 0:01:41.292 ********* 2025-05-14 02:39:01.958089 | orchestrator | =============================================================================== 2025-05-14 02:39:01.958099 | orchestrator | horizon : Restart horizon container ------------------------------------ 41.95s 2025-05-14 02:39:01.958108 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.46s 2025-05-14 02:39:01.958118 | orchestrator | horizon : Deploy horizon container -------------------------------------- 4.81s 2025-05-14 02:39:01.958127 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.17s 2025-05-14 02:39:01.958137 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.06s 2025-05-14 02:39:01.958147 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.83s 2025-05-14 02:39:01.958160 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.53s 2025-05-14 02:39:01.958174 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.45s 2025-05-14 02:39:01.958185 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.27s 2025-05-14 02:39:01.958201 | orchestrator 
| service-cert-copy : horizon | Copying over extra CA certificates -------- 1.82s 2025-05-14 02:39:01.958219 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.81s 2025-05-14 02:39:01.958237 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.19s 2025-05-14 02:39:01.958254 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.11s 2025-05-14 02:39:01.958280 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.90s 2025-05-14 02:39:01.958298 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-05-14 02:39:01.958315 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2025-05-14 02:39:01.958332 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.63s 2025-05-14 02:39:01.958350 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.62s 2025-05-14 02:39:01.958368 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2025-05-14 02:39:01.958385 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2025-05-14 02:39:01.958403 | orchestrator | 2025-05-14 02:39:01 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:01.958426 | orchestrator | 2025-05-14 02:39:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:01.958443 | orchestrator | 2025-05-14 02:39:01 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:39:01.958473 | orchestrator | 2025-05-14 02:39:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:05.008445 | orchestrator | 2025-05-14 02:39:05 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:05.009249 | orchestrator | 2025-05-14 02:39:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:05.010827 | orchestrator | 2025-05-14 02:39:05 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:39:05.010931 | orchestrator | 2025-05-14 02:39:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:08.064064 | orchestrator | 2025-05-14 02:39:08 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:08.066331 | orchestrator | 2025-05-14 02:39:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:08.066814 | orchestrator | 2025-05-14 02:39:08 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:39:08.066842 | orchestrator | 2025-05-14 02:39:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:11.110940 | orchestrator | 2025-05-14 02:39:11 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:11.112807 | orchestrator | 2025-05-14 02:39:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:11.115077 | orchestrator | 2025-05-14 02:39:11 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:39:11.115361 | orchestrator | 2025-05-14 02:39:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:14.157500 | orchestrator | 2025-05-14 02:39:14 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 
02:39:14.158840 | orchestrator | 2025-05-14 02:39:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:14.160718 | orchestrator | 2025-05-14 02:39:14 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state STARTED 2025-05-14 02:39:14.160871 | orchestrator | 2025-05-14 02:39:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:17.230312 | orchestrator | 2025-05-14 02:39:17 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:17.232031 | orchestrator | 2025-05-14 02:39:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:17.234196 | orchestrator | 2025-05-14 02:39:17.234250 | orchestrator | 2025-05-14 02:39:17 | INFO  | Task c45d9fd8-9960-42fa-a05d-9a954dbde9fd is in state SUCCESS 2025-05-14 02:39:17.236165 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:39:17.236216 | orchestrator | 2025-05-14 02:39:17.236238 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-14 02:39:17.236257 | orchestrator | 2025-05-14 02:39:17.236274 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-14 02:39:17.236293 | orchestrator | Wednesday 14 May 2025 02:37:04 +0000 (0:00:01.191) 0:00:01.191 ********* 2025-05-14 02:39:17.236311 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:39:17.236330 | orchestrator | 2025-05-14 02:39:17.236350 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-14 02:39:17.236370 | orchestrator | Wednesday 14 May 2025 02:37:05 +0000 (0:00:00.518) 0:00:01.709 ********* 2025-05-14 02:39:17.236389 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-14 02:39:17.236440 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-14 02:39:17.236453 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-14 02:39:17.236492 | orchestrator | 2025-05-14 02:39:17.236504 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-14 02:39:17.236515 | orchestrator | Wednesday 14 May 2025 02:37:05 +0000 (0:00:00.849) 0:00:02.559 ********* 2025-05-14 02:39:17.236526 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:39:17.236537 | orchestrator | 2025-05-14 02:39:17.236548 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-14 02:39:17.236560 | orchestrator | Wednesday 14 May 2025 02:37:06 +0000 (0:00:00.716) 0:00:03.275 ********* 2025-05-14 02:39:17.236570 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.236621 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.236632 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.236643 | orchestrator | 2025-05-14 02:39:17.236654 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-14 02:39:17.236666 | orchestrator | Wednesday 14 May 2025 02:37:07 +0000 (0:00:00.616) 0:00:03.892 ********* 2025-05-14 02:39:17.236676 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.236687 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.236698 | orchestrator | ok: [testbed-node-5] 2025-05-14 
02:39:17.236709 | orchestrator | 2025-05-14 02:39:17.236720 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-14 02:39:17.236731 | orchestrator | Wednesday 14 May 2025 02:37:07 +0000 (0:00:00.323) 0:00:04.216 ********* 2025-05-14 02:39:17.236742 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.236753 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.236764 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.236774 | orchestrator | 2025-05-14 02:39:17.236788 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-14 02:39:17.236801 | orchestrator | Wednesday 14 May 2025 02:37:08 +0000 (0:00:00.829) 0:00:05.046 ********* 2025-05-14 02:39:17.236813 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.236826 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.236838 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.236851 | orchestrator | 2025-05-14 02:39:17.236864 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-14 02:39:17.236876 | orchestrator | Wednesday 14 May 2025 02:37:08 +0000 (0:00:00.308) 0:00:05.354 ********* 2025-05-14 02:39:17.236889 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.236901 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.236914 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.236928 | orchestrator | 2025-05-14 02:39:17.236941 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-14 02:39:17.236968 | orchestrator | Wednesday 14 May 2025 02:37:09 +0000 (0:00:00.330) 0:00:05.684 ********* 2025-05-14 02:39:17.236981 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.236994 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.237006 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.237018 | orchestrator | 2025-05-14 02:39:17.237031 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-14 02:39:17.237043 | orchestrator | Wednesday 14 May 2025 02:37:09 +0000 (0:00:00.321) 0:00:06.006 ********* 2025-05-14 02:39:17.237055 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.237069 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.237081 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.237093 | orchestrator | 2025-05-14 02:39:17.237106 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-14 02:39:17.237119 | orchestrator | Wednesday 14 May 2025 02:37:09 +0000 (0:00:00.536) 0:00:06.543 ********* 2025-05-14 02:39:17.237132 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.237142 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.237153 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.237164 | orchestrator | 2025-05-14 02:39:17.237175 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-14 02:39:17.237186 | orchestrator | Wednesday 14 May 2025 02:37:10 +0000 (0:00:00.315) 0:00:06.858 ********* 2025-05-14 02:39:17.237205 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:39:17.237219 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:39:17.237238 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 
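The ceph-facts output above follows the usual ceph-ansible pattern: each storage node probes for a container runtime, registers a handful of facts (container_binary, ceph_cmd, monitor_name, ...), and later tasks reuse them when talking to the mon containers. As a minimal, hedged sketch only (task names and the 'mons' group variable are illustrative assumptions, not the actual contents of /ansible/roles/ceph-facts/tasks/facts.yml), the runtime detection and the delegated monitor_name loop seen in this play amount to something like:

    # Probe for podman and fall back to docker; the resulting fact feeds ceph_cmd
    # and the "docker ps -q --filter name=ceph-mon-<host>" lookups further below.
    - name: Check if podman binary is present
      ansible.builtin.stat:
        path: /usr/bin/podman
      register: podman_binary

    - name: Set container_binary fact
      ansible.builtin.set_fact:
        container_binary: "{{ 'podman' if podman_binary.stat.exists else 'docker' }}"

    # Gather each monitor's hostname by delegating the fact task to the mon
    # hosts themselves, which is why the log shows entries such as
    # "[testbed-node-3 -> testbed-node-0(192.168.16.10)]".
    - name: Set monitor_name from the delegated host's hostname
      ansible.builtin.set_fact:
        monitor_name: "{{ hostvars[item]['ansible_facts']['hostname'] }}"
      delegate_to: "{{ item }}"
      delegate_facts: true
      loop: "{{ groups['mons'] }}"

The delegation is what produces the "[testbed-node-3 -> testbed-node-N(...)]" prefixes in this play: the tasks are driven from testbed-node-3 but run against, and store facts on, each mon node in turn.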
2025-05-14 02:39:17.237255 | orchestrator | 2025-05-14 02:39:17.237273 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-14 02:39:17.237293 | orchestrator | Wednesday 14 May 2025 02:37:11 +0000 (0:00:00.827) 0:00:07.686 ********* 2025-05-14 02:39:17.237311 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.237329 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.237341 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.237352 | orchestrator | 2025-05-14 02:39:17.237368 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-14 02:39:17.237387 | orchestrator | Wednesday 14 May 2025 02:37:11 +0000 (0:00:00.580) 0:00:08.267 ********* 2025-05-14 02:39:17.237420 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:39:17.237439 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:39:17.237456 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:39:17.237475 | orchestrator | 2025-05-14 02:39:17.237494 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-14 02:39:17.237513 | orchestrator | Wednesday 14 May 2025 02:37:14 +0000 (0:00:02.401) 0:00:10.668 ********* 2025-05-14 02:39:17.237531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:39:17.237545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:39:17.237556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:39:17.237566 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.237645 | orchestrator | 2025-05-14 02:39:17.237667 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-14 02:39:17.237684 | orchestrator | Wednesday 14 May 2025 02:37:14 +0000 (0:00:00.528) 0:00:11.197 ********* 2025-05-14 02:39:17.237704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 02:39:17.237724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 02:39:17.237744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 02:39:17.237763 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.237781 | orchestrator | 2025-05-14 02:39:17.237799 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-14 02:39:17.237817 | orchestrator | Wednesday 14 May 2025 02:37:15 +0000 (0:00:00.751) 0:00:11.948 ********* 2025-05-14 02:39:17.237837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:39:17.237871 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:39:17.237904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:39:17.237917 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.237929 | orchestrator | 2025-05-14 02:39:17.237940 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-14 02:39:17.237951 | orchestrator | Wednesday 14 May 2025 02:37:15 +0000 (0:00:00.176) 0:00:12.125 ********* 2025-05-14 02:39:17.237964 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '45a4e245ab61', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-14 02:37:12.534454', 'end': '2025-05-14 02:37:12.576973', 'delta': '0:00:00.042519', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['45a4e245ab61'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-14 02:39:17.237996 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '42a03557f02e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-14 02:37:13.148698', 'end': '2025-05-14 02:37:13.191673', 'delta': '0:00:00.042975', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['42a03557f02e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-14 02:39:17.238008 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '46a6c4ca095b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-14 02:37:13.686719', 'end': '2025-05-14 02:37:13.732557', 'delta': '0:00:00.045838', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 
'stdin': None}}, 'stdout_lines': ['46a6c4ca095b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-14 02:39:17.238074 | orchestrator | 2025-05-14 02:39:17.238089 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-14 02:39:17.238100 | orchestrator | Wednesday 14 May 2025 02:37:15 +0000 (0:00:00.194) 0:00:12.320 ********* 2025-05-14 02:39:17.238111 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.238123 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.238133 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.238144 | orchestrator | 2025-05-14 02:39:17.238155 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-14 02:39:17.238166 | orchestrator | Wednesday 14 May 2025 02:37:16 +0000 (0:00:00.493) 0:00:12.814 ********* 2025-05-14 02:39:17.238177 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-14 02:39:17.238195 | orchestrator | 2025-05-14 02:39:17.238206 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-14 02:39:17.238219 | orchestrator | Wednesday 14 May 2025 02:37:17 +0000 (0:00:01.419) 0:00:14.233 ********* 2025-05-14 02:39:17.238235 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238252 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238268 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.238284 | orchestrator | 2025-05-14 02:39:17.238301 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-14 02:39:17.238317 | orchestrator | Wednesday 14 May 2025 02:37:18 +0000 (0:00:00.519) 0:00:14.752 ********* 2025-05-14 02:39:17.238334 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238345 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238355 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.238365 | orchestrator | 2025-05-14 02:39:17.238375 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:39:17.238384 | orchestrator | Wednesday 14 May 2025 02:37:18 +0000 (0:00:00.438) 0:00:15.191 ********* 2025-05-14 02:39:17.238394 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238410 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238419 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.238429 | orchestrator | 2025-05-14 02:39:17.238438 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-14 02:39:17.238448 | orchestrator | Wednesday 14 May 2025 02:37:18 +0000 (0:00:00.296) 0:00:15.488 ********* 2025-05-14 02:39:17.238457 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.238467 | orchestrator | 2025-05-14 02:39:17.238477 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-14 02:39:17.238486 | orchestrator | Wednesday 14 May 2025 02:37:18 +0000 (0:00:00.116) 0:00:15.604 ********* 2025-05-14 02:39:17.238496 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238505 | orchestrator | 2025-05-14 02:39:17.238515 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:39:17.238524 | orchestrator | Wednesday 14 May 2025 02:37:19 +0000 (0:00:00.237) 0:00:15.842 ********* 2025-05-14 02:39:17.238534 
| orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238543 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238553 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.238562 | orchestrator | 2025-05-14 02:39:17.238572 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-14 02:39:17.238606 | orchestrator | Wednesday 14 May 2025 02:37:19 +0000 (0:00:00.521) 0:00:16.364 ********* 2025-05-14 02:39:17.238615 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238625 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238635 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.238644 | orchestrator | 2025-05-14 02:39:17.238655 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-14 02:39:17.238671 | orchestrator | Wednesday 14 May 2025 02:37:20 +0000 (0:00:00.408) 0:00:16.772 ********* 2025-05-14 02:39:17.238686 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238702 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238719 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.238737 | orchestrator | 2025-05-14 02:39:17.238754 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-14 02:39:17.238770 | orchestrator | Wednesday 14 May 2025 02:37:20 +0000 (0:00:00.351) 0:00:17.123 ********* 2025-05-14 02:39:17.238784 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238794 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238813 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.238823 | orchestrator | 2025-05-14 02:39:17.238833 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-14 02:39:17.238842 | orchestrator | Wednesday 14 May 2025 02:37:20 +0000 (0:00:00.307) 0:00:17.431 ********* 2025-05-14 02:39:17.238852 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238870 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238880 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.238890 | orchestrator | 2025-05-14 02:39:17.238900 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-14 02:39:17.238909 | orchestrator | Wednesday 14 May 2025 02:37:21 +0000 (0:00:00.515) 0:00:17.947 ********* 2025-05-14 02:39:17.238919 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238928 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238938 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.238947 | orchestrator | 2025-05-14 02:39:17.238957 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-14 02:39:17.238967 | orchestrator | Wednesday 14 May 2025 02:37:21 +0000 (0:00:00.304) 0:00:18.251 ********* 2025-05-14 02:39:17.238977 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.238986 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.238996 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.239005 | orchestrator | 2025-05-14 02:39:17.239015 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-14 02:39:17.239024 | orchestrator | Wednesday 14 May 2025 02:37:21 +0000 (0:00:00.283) 0:00:18.535 ********* 2025-05-14 02:39:17.239035 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--caf94b5f--07a0--5316--9d7c--8f668ab64c5d-osd--block--caf94b5f--07a0--5316--9d7c--8f668ab64c5d', 'dm-uuid-LVM-ZTOMnjaLSd9SUt3iz7042ZI7zHa7ehDAKJlCxan9qclgcEPHFYha1Tc6FZ3eWICR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0a91196--50f5--599a--8231--3d981ca1eca9-osd--block--a0a91196--50f5--599a--8231--3d981ca1eca9', 'dm-uuid-LVM-DdSgoItp3kLzXGfqSWc7KV1e81S9ldTEsDDmSkFMuQLBzYYJUOIHqcN4rbQfCsu0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239136 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea3c2360--3d2e--5360--8839--85b817b77bc3-osd--block--ea3c2360--3d2e--5360--8839--85b817b77bc3', 'dm-uuid-LVM-ZvfW4xHeBxWJ0JwFq55oHg9Eas3fybM2dZ1b2IRfrayLX44ir6xE7p01kFSQwchZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fecac30f--087c--5b0b--83ef--f9d2b642a995-osd--block--fecac30f--087c--5b0b--83ef--f9d2b642a995', 'dm-uuid-LVM-F1SoaSTraaxmWaDqVV9hFSecVNiGPTB9OoM2w1nei0W0EK61FawRIFDaITD4sMEw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 
02:39:17.239225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e4d6019-cfa5-4932-b542-f7abf313e9f1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--caf94b5f--07a0--5316--9d7c--8f668ab64c5d-osd--block--caf94b5f--07a0--5316--9d7c--8f668ab64c5d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c6J53J-1ruA-Aj0l-N1eD-BZNl-R7xJ-EDOTDH', 'scsi-0QEMU_QEMU_HARDDISK_6c9e420d-0c60-4ebc-ac19-f905b2b7a82f', 'scsi-SQEMU_QEMU_HARDDISK_6c9e420d-0c60-4ebc-ac19-f905b2b7a82f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a0a91196--50f5--599a--8231--3d981ca1eca9-osd--block--a0a91196--50f5--599a--8231--3d981ca1eca9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BubS93-4kSO-DDMx-NQ79-XP0m-dbZL-Q4o4N0', 'scsi-0QEMU_QEMU_HARDDISK_7c39c8ea-7878-4e89-b4ec-61bbe868aea7', 'scsi-SQEMU_QEMU_HARDDISK_7c39c8ea-7878-4e89-b4ec-61bbe868aea7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e31a2ff7-84d9-48c9-b0e1-1526f23b46b1', 'scsi-SQEMU_QEMU_HARDDISK_e31a2ff7-84d9-48c9-b0e1-1526f23b46b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239440 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.239460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part1', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part14', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part15', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part16', 'scsi-SQEMU_QEMU_HARDDISK_4b4844e9-36f4-43ee-94f9-25fe1d60740b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ea3c2360--3d2e--5360--8839--85b817b77bc3-osd--block--ea3c2360--3d2e--5360--8839--85b817b77bc3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xuEc73-D3ov-BEsr-vtcG-WkfW-YFFh-RkQRMM', 'scsi-0QEMU_QEMU_HARDDISK_2fe9822d-742a-4109-b2fd-4f62bd011e9b', 'scsi-SQEMU_QEMU_HARDDISK_2fe9822d-742a-4109-b2fd-4f62bd011e9b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fecac30f--087c--5b0b--83ef--f9d2b642a995-osd--block--fecac30f--087c--5b0b--83ef--f9d2b642a995'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6rroif-CDia-e2Uf-a8Gz-xMld-tzkG-Fpt7Ie', 'scsi-0QEMU_QEMU_HARDDISK_4bf8951c-ead1-422f-8e98-563fd238f873', 'scsi-SQEMU_QEMU_HARDDISK_4bf8951c-ead1-422f-8e98-563fd238f873'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9158ba9c-f661-457a-83a0-7301d2e715e9', 'scsi-SQEMU_QEMU_HARDDISK_9158ba9c-f661-457a-83a0-7301d2e715e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239533 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.239549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03d77871--dede--5752--b4dd--afb6f86d8bca-osd--block--03d77871--dede--5752--b4dd--afb6f86d8bca', 'dm-uuid-LVM-IDAJ819ekzEGVYidaDTaD9Y5ZOiWCmRfi1FSgSb4gPJkINyqialcVodaMedaJccO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c7e27ae--f126--51b5--99e7--7e9908cad598-osd--block--0c7e27ae--f126--51b5--99e7--7e9908cad598', 'dm-uuid-LVM-XBp7l9yF39H6kNCz4oRlhe3vRMb8Tg516CaEYxFsVRTfIpPKJFIUvwBRmiKncpNN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:17.239735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part1', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part14', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part15', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part16', 'scsi-SQEMU_QEMU_HARDDISK_d343cbf4-64a5-4d74-aedc-ee3edf681b53-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--03d77871--dede--5752--b4dd--afb6f86d8bca-osd--block--03d77871--dede--5752--b4dd--afb6f86d8bca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yDHJI6-oSxo-4ect-JY34-UAmO-wAly-O0hYht', 'scsi-0QEMU_QEMU_HARDDISK_7d716f79-cf1d-4cd5-9251-d30dd616fe8c', 'scsi-SQEMU_QEMU_HARDDISK_7d716f79-cf1d-4cd5-9251-d30dd616fe8c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0c7e27ae--f126--51b5--99e7--7e9908cad598-osd--block--0c7e27ae--f126--51b5--99e7--7e9908cad598'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JF6WJR-Mtz9-qQlp-5BLF-QwSQ-lxFw-fvvc0G', 'scsi-0QEMU_QEMU_HARDDISK_276d5307-5ea7-4279-8794-03223ea8507b', 'scsi-SQEMU_QEMU_HARDDISK_276d5307-5ea7-4279-8794-03223ea8507b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07a08b1a-3bd9-437e-a737-9a0e3fc440bf', 'scsi-SQEMU_QEMU_HARDDISK_07a08b1a-3bd9-437e-a737-9a0e3fc440bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:17.239831 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.239850 | orchestrator | 2025-05-14 02:39:17.239865 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-14 02:39:17.239881 | orchestrator | Wednesday 14 May 2025 02:37:22 +0000 (0:00:00.642) 0:00:19.177 ********* 2025-05-14 02:39:17.239897 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-14 02:39:17.239910 | orchestrator | 2025-05-14 02:39:17.239924 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-14 02:39:17.239940 | orchestrator | Wednesday 14 May 2025 02:37:24 +0000 (0:00:01.510) 0:00:20.687 ********* 2025-05-14 02:39:17.239956 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.239973 | orchestrator | 2025-05-14 02:39:17.239990 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-14 02:39:17.240006 | orchestrator | Wednesday 14 May 2025 02:37:24 +0000 (0:00:00.152) 0:00:20.840 ********* 2025-05-14 02:39:17.240019 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.240029 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.240039 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.240048 | orchestrator | 2025-05-14 02:39:17.240058 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-14 02:39:17.240068 | orchestrator | Wednesday 14 May 2025 02:37:24 +0000 (0:00:00.366) 0:00:21.207 ********* 2025-05-14 02:39:17.240077 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.240087 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.240097 | orchestrator | ok: 
[testbed-node-5] 2025-05-14 02:39:17.240107 | orchestrator | 2025-05-14 02:39:17.240116 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-14 02:39:17.240135 | orchestrator | Wednesday 14 May 2025 02:37:25 +0000 (0:00:00.658) 0:00:21.865 ********* 2025-05-14 02:39:17.240145 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.240155 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.240165 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.240175 | orchestrator | 2025-05-14 02:39:17.240184 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:39:17.240194 | orchestrator | Wednesday 14 May 2025 02:37:25 +0000 (0:00:00.299) 0:00:22.164 ********* 2025-05-14 02:39:17.240204 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.240213 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.240223 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.240233 | orchestrator | 2025-05-14 02:39:17.240243 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:39:17.240253 | orchestrator | Wednesday 14 May 2025 02:37:26 +0000 (0:00:00.907) 0:00:23.071 ********* 2025-05-14 02:39:17.240263 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.240282 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.240292 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.240301 | orchestrator | 2025-05-14 02:39:17.240311 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:39:17.240321 | orchestrator | Wednesday 14 May 2025 02:37:26 +0000 (0:00:00.298) 0:00:23.370 ********* 2025-05-14 02:39:17.240331 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.240340 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.240350 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.240360 | orchestrator | 2025-05-14 02:39:17.240369 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:39:17.240379 | orchestrator | Wednesday 14 May 2025 02:37:27 +0000 (0:00:00.512) 0:00:23.883 ********* 2025-05-14 02:39:17.240389 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.240398 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.240408 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.240418 | orchestrator | 2025-05-14 02:39:17.240428 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-14 02:39:17.240438 | orchestrator | Wednesday 14 May 2025 02:37:27 +0000 (0:00:00.342) 0:00:24.225 ********* 2025-05-14 02:39:17.240447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:39:17.240457 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:39:17.240467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:39:17.240477 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:39:17.240486 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:39:17.240496 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.240505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:39:17.240515 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.240525 | orchestrator | skipping: 
[testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:39:17.240535 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:39:17.240545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:39:17.240554 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.240564 | orchestrator | 2025-05-14 02:39:17.240599 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-14 02:39:17.240620 | orchestrator | Wednesday 14 May 2025 02:37:28 +0000 (0:00:01.115) 0:00:25.340 ********* 2025-05-14 02:39:17.240630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:39:17.240640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:39:17.240649 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:39:17.240659 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:39:17.240668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:39:17.240684 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.240694 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:39:17.240703 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:39:17.240713 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:39:17.240722 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.240732 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:39:17.240741 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.240751 | orchestrator | 2025-05-14 02:39:17.240761 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-14 02:39:17.240770 | orchestrator | Wednesday 14 May 2025 02:37:29 +0000 (0:00:00.767) 0:00:26.108 ********* 2025-05-14 02:39:17.240780 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-14 02:39:17.240789 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-14 02:39:17.240799 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-14 02:39:17.240809 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-14 02:39:17.240818 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-14 02:39:17.240828 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-14 02:39:17.240837 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-14 02:39:17.240847 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-14 02:39:17.240856 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-14 02:39:17.240866 | orchestrator | 2025-05-14 02:39:17.240875 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-14 02:39:17.240885 | orchestrator | Wednesday 14 May 2025 02:37:31 +0000 (0:00:01.878) 0:00:27.987 ********* 2025-05-14 02:39:17.240894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:39:17.240904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:39:17.240914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:39:17.240923 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:39:17.240932 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  
2025-05-14 02:39:17.240942 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:39:17.240952 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.240961 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.240971 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:39:17.240980 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:39:17.240990 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:39:17.240999 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.241009 | orchestrator | 2025-05-14 02:39:17.241018 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-14 02:39:17.241028 | orchestrator | Wednesday 14 May 2025 02:37:31 +0000 (0:00:00.587) 0:00:28.575 ********* 2025-05-14 02:39:17.241038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:39:17.241052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:39:17.241148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:39:17.241169 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:39:17.241185 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:39:17.241199 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.241214 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:39:17.241231 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.241249 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:39:17.241265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:39:17.241294 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:39:17.241310 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.241324 | orchestrator | 2025-05-14 02:39:17.241334 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-14 02:39:17.241344 | orchestrator | Wednesday 14 May 2025 02:37:32 +0000 (0:00:00.416) 0:00:28.992 ********* 2025-05-14 02:39:17.241354 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:39:17.241365 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:39:17.241375 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:39:17.241385 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.241395 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:39:17.241405 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:39:17.241414 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:39:17.241424 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.241438 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:39:17.241465 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:39:17.241482 | orchestrator | skipping: [testbed-node-5] => 
(item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:39:17.241497 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.241513 | orchestrator | 2025-05-14 02:39:17.241530 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-14 02:39:17.241547 | orchestrator | Wednesday 14 May 2025 02:37:32 +0000 (0:00:00.411) 0:00:29.403 ********* 2025-05-14 02:39:17.241564 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:39:17.241617 | orchestrator | 2025-05-14 02:39:17.241629 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:39:17.241639 | orchestrator | Wednesday 14 May 2025 02:37:33 +0000 (0:00:00.711) 0:00:30.115 ********* 2025-05-14 02:39:17.241649 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.241659 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.241671 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.241688 | orchestrator | 2025-05-14 02:39:17.241705 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:39:17.241718 | orchestrator | Wednesday 14 May 2025 02:37:33 +0000 (0:00:00.302) 0:00:30.417 ********* 2025-05-14 02:39:17.241741 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.241762 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.241779 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.241796 | orchestrator | 2025-05-14 02:39:17.241814 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:39:17.241830 | orchestrator | Wednesday 14 May 2025 02:37:34 +0000 (0:00:00.323) 0:00:30.741 ********* 2025-05-14 02:39:17.241847 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.241858 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.241868 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.241877 | orchestrator | 2025-05-14 02:39:17.241887 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:39:17.241897 | orchestrator | Wednesday 14 May 2025 02:37:34 +0000 (0:00:00.377) 0:00:31.118 ********* 2025-05-14 02:39:17.241906 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.241918 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.241938 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.241961 | orchestrator | 2025-05-14 02:39:17.241976 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:39:17.242004 | orchestrator | Wednesday 14 May 2025 02:37:35 +0000 (0:00:00.825) 0:00:31.944 ********* 2025-05-14 02:39:17.242070 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:39:17.242083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:39:17.242093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:39:17.242102 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.242112 | orchestrator | 2025-05-14 02:39:17.242122 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:39:17.242132 | orchestrator | Wednesday 14 May 2025 02:37:35 +0000 (0:00:00.403) 0:00:32.349 ********* 
2025-05-14 02:39:17.242142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:39:17.242151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:39:17.242161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:39:17.242171 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.242180 | orchestrator | 2025-05-14 02:39:17.242192 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:39:17.242218 | orchestrator | Wednesday 14 May 2025 02:37:36 +0000 (0:00:00.398) 0:00:32.747 ********* 2025-05-14 02:39:17.242233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:39:17.242300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:39:17.242314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:39:17.242324 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.242333 | orchestrator | 2025-05-14 02:39:17.242343 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:39:17.242353 | orchestrator | Wednesday 14 May 2025 02:37:36 +0000 (0:00:00.365) 0:00:33.113 ********* 2025-05-14 02:39:17.242362 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:39:17.242372 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:39:17.242381 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:39:17.242391 | orchestrator | 2025-05-14 02:39:17.242400 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:39:17.242410 | orchestrator | Wednesday 14 May 2025 02:37:36 +0000 (0:00:00.301) 0:00:33.414 ********* 2025-05-14 02:39:17.242419 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 02:39:17.242429 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 02:39:17.242438 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-14 02:39:17.242452 | orchestrator | 2025-05-14 02:39:17.242468 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:39:17.242484 | orchestrator | Wednesday 14 May 2025 02:37:37 +0000 (0:00:00.648) 0:00:34.063 ********* 2025-05-14 02:39:17.242499 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.242516 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.242533 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.242549 | orchestrator | 2025-05-14 02:39:17.242563 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:39:17.242593 | orchestrator | Wednesday 14 May 2025 02:37:37 +0000 (0:00:00.477) 0:00:34.540 ********* 2025-05-14 02:39:17.242603 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.242613 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.242623 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.242633 | orchestrator | 2025-05-14 02:39:17.242643 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:39:17.242663 | orchestrator | Wednesday 14 May 2025 02:37:38 +0000 (0:00:00.342) 0:00:34.883 ********* 2025-05-14 02:39:17.242680 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:39:17.242696 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.242713 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 
02:39:17.242729 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.242746 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:39:17.242773 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.242783 | orchestrator | 2025-05-14 02:39:17.242793 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:39:17.242803 | orchestrator | Wednesday 14 May 2025 02:37:38 +0000 (0:00:00.460) 0:00:35.343 ********* 2025-05-14 02:39:17.242813 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:39:17.242822 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.242832 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:39:17.242842 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.242852 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:39:17.242861 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.242871 | orchestrator | 2025-05-14 02:39:17.242881 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:39:17.242890 | orchestrator | Wednesday 14 May 2025 02:37:39 +0000 (0:00:00.284) 0:00:35.628 ********* 2025-05-14 02:39:17.242900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:39:17.242909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:39:17.242919 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:39:17.242928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:39:17.242938 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.242947 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:39:17.242957 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:39:17.242966 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:39:17.242975 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.242985 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:39:17.242994 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:39:17.243004 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.243013 | orchestrator | 2025-05-14 02:39:17.243022 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-14 02:39:17.243032 | orchestrator | Wednesday 14 May 2025 02:37:39 +0000 (0:00:00.864) 0:00:36.493 ********* 2025-05-14 02:39:17.243042 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.243051 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.243061 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:39:17.243070 | orchestrator | 2025-05-14 02:39:17.243080 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-14 02:39:17.243089 | orchestrator | Wednesday 14 May 2025 02:37:40 +0000 (0:00:00.291) 0:00:36.784 ********* 2025-05-14 02:39:17.243099 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:39:17.243108 
| orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:39:17.243125 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:39:17.243134 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-14 02:39:17.243144 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:39:17.243154 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:39:17.243164 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:39:17.243173 | orchestrator | 2025-05-14 02:39:17.243183 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-14 02:39:17.243193 | orchestrator | Wednesday 14 May 2025 02:37:41 +0000 (0:00:01.289) 0:00:38.074 ********* 2025-05-14 02:39:17.243209 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:39:17.243218 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:39:17.243228 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:39:17.243245 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-14 02:39:17.243271 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:39:17.243287 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:39:17.243302 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:39:17.243318 | orchestrator | 2025-05-14 02:39:17.243336 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-14 02:39:17.243352 | orchestrator | Wednesday 14 May 2025 02:37:43 +0000 (0:00:01.930) 0:00:40.005 ********* 2025-05-14 02:39:17.243368 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:39:17.243385 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:39:17.243402 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-14 02:39:17.243417 | orchestrator | 2025-05-14 02:39:17.243431 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-14 02:39:17.243457 | orchestrator | Wednesday 14 May 2025 02:37:43 +0000 (0:00:00.547) 0:00:40.553 ********* 2025-05-14 02:39:17.243477 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:39:17.243495 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:39:17.243512 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 
32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:39:17.243523 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:39:17.243533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:39:17.243542 | orchestrator | 2025-05-14 02:39:17.243552 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-14 02:39:17.243561 | orchestrator | Wednesday 14 May 2025 02:38:24 +0000 (0:00:40.345) 0:01:20.898 ********* 2025-05-14 02:39:17.243571 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243612 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243625 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243634 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243644 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243658 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243688 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-14 02:39:17.243704 | orchestrator | 2025-05-14 02:39:17.243719 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-14 02:39:17.243734 | orchestrator | Wednesday 14 May 2025 02:38:44 +0000 (0:00:20.686) 0:01:41.584 ********* 2025-05-14 02:39:17.243749 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243772 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243788 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243803 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243820 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243837 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243853 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 02:39:17.243868 | orchestrator | 2025-05-14 02:39:17.243879 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-14 02:39:17.243888 | orchestrator | Wednesday 14 May 2025 02:38:54 +0000 (0:00:09.985) 0:01:51.570 ********* 2025-05-14 02:39:17.243898 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:39:17.243907 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 02:39:17.243917 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 02:39:17.243927 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-14 02:39:17.243936 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-14 02:39:17.243945 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-14 02:39:17.243955 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-14 02:39:17.243964 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-14 02:39:17.243975 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-14 02:39:17.243984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-14 02:39:17.243994 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-14 02:39:17.244012 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-14 02:39:17.244021 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-14 02:39:17.244031 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-14 02:39:17.244041 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-14 02:39:17.244050 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-14 02:39:17.244060 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-14 02:39:17.244069 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-14 02:39:17.244079 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-05-14 02:39:17.244089 | orchestrator |
2025-05-14 02:39:17.244098 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:39:17.244108 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-14 02:39:17.244119 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-05-14 02:39:17.244129 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-05-14 02:39:17.244150 | orchestrator |
2025-05-14 02:39:17.244160 | orchestrator |
2025-05-14 02:39:17.244170 | orchestrator |
2025-05-14 02:39:17.244180 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:39:17.244189 | orchestrator | Wednesday 14 May 2025 02:39:14 +0000 (0:00:19.074) 0:02:10.645 *********
2025-05-14 02:39:17.244198 | orchestrator | ===============================================================================
2025-05-14 02:39:17.244208 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.35s
2025-05-14 02:39:17.244217 | orchestrator | generate keys ---------------------------------------------------------- 20.69s
2025-05-14 02:39:17.244227 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 19.07s
2025-05-14 02:39:17.244236 | orchestrator | get keys from monitors -------------------------------------------------- 9.99s
2025-05-14 02:39:17.244245 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.40s
2025-05-14 02:39:17.244255 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.93s
2025-05-14 02:39:17.244264 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.88s
2025-05-14 02:39:17.244274 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.51s
2025-05-14 02:39:17.244284 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.42s
2025-05-14 02:39:17.244293 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.29s
2025-05-14 02:39:17.244303 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.12s
2025-05-14 02:39:17.244313 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.91s
2025-05-14 02:39:17.244322 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.86s
2025-05-14 02:39:17.244336 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.85s
2025-05-14 02:39:17.244346 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.83s
2025-05-14 02:39:17.244355 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.83s
2025-05-14 02:39:17.244365 | orchestrator | ceph-facts : set_fact _radosgw_address to radosgw_address --------------- 0.83s
2025-05-14 02:39:17.244374 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.77s
2025-05-14 02:39:17.244384 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.75s
2025-05-14 02:39:17.244394 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.72s
2025-05-14 02:39:17.244403 | orchestrator | 2025-05-14 02:39:17 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED
2025-05-14 02:39:17.244413 | orchestrator | 2025-05-14 02:39:17 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:39:20.290261 | orchestrator | 2025-05-14 02:39:20 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED
2025-05-14 02:39:20.291563 | orchestrator | 2025-05-14 02:39:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:39:20.292546 | orchestrator | 2025-05-14 02:39:20 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED
2025-05-14 02:39:20.292754 | orchestrator | 2025-05-14 02:39:20 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:39:23.346091 | orchestrator | 2025-05-14 02:39:23 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED
2025-05-14 02:39:23.348473 | orchestrator | 2025-05-14 02:39:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:39:23.350821 | orchestrator | 2025-05-14 02:39:23 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED
2025-05-14 02:39:23.350857 | orchestrator | 2025-05-14 02:39:23 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:39:26.398605 | orchestrator | 2025-05-14 02:39:26 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED
2025-05-14 02:39:26.402232 | orchestrator | 2025-05-14 02:39:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:39:26.404843 | orchestrator | 2025-05-14 02:39:26 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED 2025-05-14
02:39:26.408005 | orchestrator | 2025-05-14 02:39:26 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:26.408058 | orchestrator | 2025-05-14 02:39:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:29.458261 | orchestrator | 2025-05-14 02:39:29 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:29.460061 | orchestrator | 2025-05-14 02:39:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:29.461373 | orchestrator | 2025-05-14 02:39:29 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED 2025-05-14 02:39:29.463191 | orchestrator | 2025-05-14 02:39:29 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:29.463241 | orchestrator | 2025-05-14 02:39:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:32.514720 | orchestrator | 2025-05-14 02:39:32 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:32.515822 | orchestrator | 2025-05-14 02:39:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:32.517348 | orchestrator | 2025-05-14 02:39:32 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED 2025-05-14 02:39:32.519850 | orchestrator | 2025-05-14 02:39:32 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:32.519901 | orchestrator | 2025-05-14 02:39:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:35.568650 | orchestrator | 2025-05-14 02:39:35 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:35.570616 | orchestrator | 2025-05-14 02:39:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:35.571306 | orchestrator | 2025-05-14 02:39:35 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED 2025-05-14 02:39:35.572789 | orchestrator | 2025-05-14 02:39:35 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:35.572841 | orchestrator | 2025-05-14 02:39:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:38.625481 | orchestrator | 2025-05-14 02:39:38 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:38.626168 | orchestrator | 2025-05-14 02:39:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:38.627169 | orchestrator | 2025-05-14 02:39:38 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED 2025-05-14 02:39:38.628364 | orchestrator | 2025-05-14 02:39:38 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:38.628413 | orchestrator | 2025-05-14 02:39:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:41.691135 | orchestrator | 2025-05-14 02:39:41 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:41.691631 | orchestrator | 2025-05-14 02:39:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:41.694465 | orchestrator | 2025-05-14 02:39:41 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED 2025-05-14 02:39:41.697543 | orchestrator | 2025-05-14 02:39:41 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:41.697624 | orchestrator | 2025-05-14 02:39:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:44.763871 | 
orchestrator | 2025-05-14 02:39:44 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:44.765314 | orchestrator | 2025-05-14 02:39:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:44.770894 | orchestrator | 2025-05-14 02:39:44 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED 2025-05-14 02:39:44.773994 | orchestrator | 2025-05-14 02:39:44 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:44.774073 | orchestrator | 2025-05-14 02:39:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:47.822448 | orchestrator | 2025-05-14 02:39:47 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:47.825586 | orchestrator | 2025-05-14 02:39:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:47.827583 | orchestrator | 2025-05-14 02:39:47 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED 2025-05-14 02:39:47.830614 | orchestrator | 2025-05-14 02:39:47 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:47.830683 | orchestrator | 2025-05-14 02:39:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:50.883097 | orchestrator | 2025-05-14 02:39:50 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state STARTED 2025-05-14 02:39:50.884709 | orchestrator | 2025-05-14 02:39:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:39:50.885754 | orchestrator | 2025-05-14 02:39:50 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED 2025-05-14 02:39:50.887070 | orchestrator | 2025-05-14 02:39:50 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:50.887101 | orchestrator | 2025-05-14 02:39:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:39:53.936688 | orchestrator | 2025-05-14 02:39:53 | INFO  | Task dfc02968-dffd-42bb-9aa4-4382ac0da5f1 is in state SUCCESS 2025-05-14 02:39:53.938300 | orchestrator | 2025-05-14 02:39:53.938448 | orchestrator | 2025-05-14 02:39:53.938477 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:39:53.938501 | orchestrator | 2025-05-14 02:39:53.938523 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:39:53.938542 | orchestrator | Wednesday 14 May 2025 02:37:17 +0000 (0:00:00.335) 0:00:00.335 ********* 2025-05-14 02:39:53.938600 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:53.938622 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:53.938639 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:53.938651 | orchestrator | 2025-05-14 02:39:53.938662 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:39:53.938706 | orchestrator | Wednesday 14 May 2025 02:37:18 +0000 (0:00:00.511) 0:00:00.847 ********* 2025-05-14 02:39:53.938728 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-14 02:39:53.938746 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-14 02:39:53.938764 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-14 02:39:53.938782 | orchestrator | 2025-05-14 02:39:53.938802 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-14 02:39:53.938822 | orchestrator 
| 2025-05-14 02:39:53.938842 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 02:39:53.938861 | orchestrator | Wednesday 14 May 2025 02:37:18 +0000 (0:00:00.353) 0:00:01.200 ********* 2025-05-14 02:39:53.938926 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:39:53.938940 | orchestrator | 2025-05-14 02:39:53.938952 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-14 02:39:53.938963 | orchestrator | Wednesday 14 May 2025 02:37:19 +0000 (0:00:00.770) 0:00:01.970 ********* 2025-05-14 02:39:53.939003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.939022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.939079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.939094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939225 | orchestrator | 2025-05-14 02:39:53.939243 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-14 02:39:53.939278 | orchestrator | Wednesday 14 May 2025 02:37:21 +0000 (0:00:02.570) 0:00:04.541 ********* 2025-05-14 02:39:53.939299 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-14 02:39:53.939318 | orchestrator | 2025-05-14 02:39:53.939335 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-14 02:39:53.939347 | orchestrator | Wednesday 14 May 2025 02:37:22 +0000 (0:00:00.579) 0:00:05.120 ********* 2025-05-14 02:39:53.939368 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:53.939381 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:53.939392 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:53.939403 | orchestrator | 2025-05-14 02:39:53.939414 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-14 02:39:53.939425 | orchestrator | Wednesday 14 May 2025 02:37:22 +0000 (0:00:00.353) 0:00:05.473 ********* 2025-05-14 02:39:53.939436 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:39:53.939448 | orchestrator | 2025-05-14 02:39:53.939459 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 02:39:53.939470 | orchestrator | Wednesday 14 May 2025 02:37:23 +0000 (0:00:00.402) 0:00:05.876 ********* 2025-05-14 02:39:53.939481 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:39:53.939492 | orchestrator | 2025-05-14 02:39:53.939503 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-14 02:39:53.939514 | orchestrator | Wednesday 14 May 2025 02:37:23 +0000 (0:00:00.530) 0:00:06.407 ********* 2025-05-14 02:39:53.939533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.939546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.939600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.939625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.939735 | orchestrator | 2025-05-14 02:39:53.939754 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-14 02:39:53.939772 | orchestrator | Wednesday 14 May 2025 02:37:27 +0000 (0:00:03.492) 0:00:09.899 ********* 2025-05-14 02:39:53.939804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:39:53.939834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.939857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:39:53.939878 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:53.939898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:39:53.939920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.939957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:39:53.939969 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:53.939987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:39:53.939999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.940011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:39:53.940022 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:53.940033 | orchestrator | 2025-05-14 02:39:53.940045 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-14 
02:39:53.940056 | orchestrator | Wednesday 14 May 2025 02:37:28 +0000 (0:00:01.093) 0:00:10.993 ********* 2025-05-14 02:39:53.940067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:39:53.940094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.940106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:39:53.940117 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:53.940135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:39:53.940147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.940159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:39:53.940183 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:53.940204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:39:53.940216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.940233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:39:53.940245 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:53.940256 | orchestrator | 2025-05-14 02:39:53.940267 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-14 02:39:53.940278 | orchestrator | Wednesday 14 May 2025 02:37:29 +0000 (0:00:01.264) 0:00:12.257 ********* 2025-05-14 02:39:53.940290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.940311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.940331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.940349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.940361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.940373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.940393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.940405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.940423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.940435 | orchestrator | 2025-05-14 02:39:53.940446 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-14 02:39:53.940457 | orchestrator | Wednesday 14 May 2025 02:37:33 +0000 (0:00:04.152) 0:00:16.410 ********* 2025-05-14 02:39:53.940474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.940486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.940507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
"roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.940519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.940539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.940588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.940602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.940613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.940632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.940644 | orchestrator | 2025-05-14 02:39:53.940655 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-14 02:39:53.940667 | orchestrator | Wednesday 14 May 2025 02:37:40 +0000 (0:00:07.058) 0:00:23.469 ********* 2025-05-14 02:39:53.940678 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:39:53.940689 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:39:53.940700 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:39:53.940711 | orchestrator | 2025-05-14 02:39:53.940722 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-14 02:39:53.940741 | orchestrator | Wednesday 14 May 2025 02:37:43 +0000 (0:00:02.605) 0:00:26.074 ********* 2025-05-14 02:39:53.940759 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:53.940777 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:53.940796 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:53.940814 | orchestrator | 2025-05-14 02:39:53.940843 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-14 02:39:53.940862 | orchestrator | Wednesday 14 May 2025 02:37:44 +0000 (0:00:01.611) 0:00:27.685 ********* 2025-05-14 02:39:53.940880 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:53.940899 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:53.940911 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:53.940922 | orchestrator | 2025-05-14 02:39:53.940933 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-14 02:39:53.940944 | orchestrator | Wednesday 14 May 2025 02:37:45 +0000 (0:00:00.528) 0:00:28.214 ********* 2025-05-14 02:39:53.940955 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:53.940967 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:53.940985 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:53.941003 | orchestrator | 2025-05-14 02:39:53.941023 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-14 02:39:53.941042 | orchestrator | Wednesday 14 May 2025 02:37:45 +0000 (0:00:00.402) 0:00:28.616 ********* 2025-05-14 02:39:53.941070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.941103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.941124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.941145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.941178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.941206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:39:53.941228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.941240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.941251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.941262 | orchestrator | 2025-05-14 02:39:53.941273 | orchestrator | TASK [keystone 
: include_tasks] ************************************************ 2025-05-14 02:39:53.941284 | orchestrator | Wednesday 14 May 2025 02:37:48 +0000 (0:00:02.436) 0:00:31.052 ********* 2025-05-14 02:39:53.941295 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:53.941306 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:53.941317 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:53.941328 | orchestrator | 2025-05-14 02:39:53.941338 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-14 02:39:53.941349 | orchestrator | Wednesday 14 May 2025 02:37:48 +0000 (0:00:00.425) 0:00:31.478 ********* 2025-05-14 02:39:53.941360 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-14 02:39:53.941371 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-14 02:39:53.941389 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-14 02:39:53.941400 | orchestrator | 2025-05-14 02:39:53.941411 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-14 02:39:53.941421 | orchestrator | Wednesday 14 May 2025 02:37:50 +0000 (0:00:02.211) 0:00:33.690 ********* 2025-05-14 02:39:53.941432 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:39:53.941443 | orchestrator | 2025-05-14 02:39:53.941454 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-14 02:39:53.941464 | orchestrator | Wednesday 14 May 2025 02:37:51 +0000 (0:00:00.632) 0:00:34.323 ********* 2025-05-14 02:39:53.941475 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:53.941486 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:53.941505 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:53.941516 | orchestrator | 2025-05-14 02:39:53.941526 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-14 02:39:53.941537 | orchestrator | Wednesday 14 May 2025 02:37:52 +0000 (0:00:01.230) 0:00:35.553 ********* 2025-05-14 02:39:53.941548 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 02:39:53.941587 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:39:53.941600 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 02:39:53.941611 | orchestrator | 2025-05-14 02:39:53.941621 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-14 02:39:53.941632 | orchestrator | Wednesday 14 May 2025 02:37:54 +0000 (0:00:01.211) 0:00:36.765 ********* 2025-05-14 02:39:53.941643 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:53.941654 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:53.941668 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:53.941687 | orchestrator | 2025-05-14 02:39:53.941705 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-14 02:39:53.941730 | orchestrator | Wednesday 14 May 2025 02:37:54 +0000 (0:00:00.376) 0:00:37.142 ********* 2025-05-14 02:39:53.941749 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-14 02:39:53.941766 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-14 02:39:53.941784 | orchestrator | changed: 
[testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-14 02:39:53.941803 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-14 02:39:53.941821 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-14 02:39:53.941840 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-14 02:39:53.941859 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-14 02:39:53.941879 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-14 02:39:53.941897 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-14 02:39:53.941916 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-14 02:39:53.941935 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-14 02:39:53.941953 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-14 02:39:53.941969 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-14 02:39:53.941980 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-14 02:39:53.941991 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-14 02:39:53.942002 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:39:53.942013 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:39:53.942088 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:39:53.942100 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:39:53.942120 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:39:53.942140 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:39:53.942158 | orchestrator | 2025-05-14 02:39:53.942175 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-14 02:39:53.942193 | orchestrator | Wednesday 14 May 2025 02:38:05 +0000 (0:00:11.122) 0:00:48.264 ********* 2025-05-14 02:39:53.942232 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:39:53.942253 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:39:53.942272 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:39:53.942291 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:39:53.942310 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:39:53.942341 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
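The two copy tasks above stage the fernet tooling on each controller: a crontab plus fernet-rotate.sh, fernet-node-sync.sh, fernet-push.sh and fernet-healthcheck.sh for the keystone_fernet container, and sshd_config/id_rsa.pub for the keystone_ssh container that listens on port 8023. As a rough illustration of the rotate-and-push flow those scripts implement, here is a minimal Python sketch; it is not the kolla-ansible templates themselves, and the key directory, peer list and rsync/ssh invocation are assumptions for illustration only.

    #!/usr/bin/env python3
    # Illustrative sketch only -- not the fernet-rotate.sh / fernet-push.sh
    # templates copied above. Paths, peers and flags are assumptions.
    import subprocess

    FERNET_DIR = "/etc/keystone/fernet-keys"      # volume mounted into the containers above
    PEERS = ["testbed-node-1", "testbed-node-2"]  # hosts running keystone_ssh
    SSH_PORT = 8023                               # port probed by 'healthcheck_listen sshd 8023'

    def rotate_keys() -> None:
        # Rotate the fernet key set in place (what fernet-rotate.sh wraps).
        subprocess.run(
            ["keystone-manage", "fernet_rotate",
             "--keystone-user", "keystone", "--keystone-group", "keystone"],
            check=True,
        )

    def push_keys() -> None:
        # Copy the rotated keys to the other controllers through keystone_ssh
        # (what fernet-push.sh wraps).
        for host in PEERS:
            subprocess.run(
                ["rsync", "-a", "--delete",
                 "-e", f"ssh -p {SSH_PORT}",
                 f"{FERNET_DIR}/", f"keystone@{host}:{FERNET_DIR}/"],
                check=True,
            )

    if __name__ == "__main__":
        rotate_keys()
        push_keys()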
2025-05-14 02:39:53.942362 | orchestrator | 2025-05-14 02:39:53.942380 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-14 02:39:53.942399 | orchestrator | Wednesday 14 May 2025 02:38:08 +0000 (0:00:03.291) 0:00:51.555 ********* 2025-05-14 02:39:53.942420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.942451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.942466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:39:53.942488 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.942512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.942524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:39:53.942541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.942553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.942592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:39:53.942621 | orchestrator | 2025-05-14 02:39:53.942642 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 02:39:53.942661 | orchestrator | Wednesday 14 May 2025 02:38:11 +0000 (0:00:02.887) 0:00:54.443 ********* 2025-05-14 02:39:53.942679 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:53.942699 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:53.942717 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:53.942735 | orchestrator | 2025-05-14 02:39:53.942754 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-14 02:39:53.942772 | orchestrator | Wednesday 14 May 2025 02:38:11 +0000 (0:00:00.262) 0:00:54.705 ********* 2025-05-14 02:39:53.942790 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:39:53.942809 | orchestrator | 2025-05-14 02:39:53.942827 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-14 02:39:53.942846 | orchestrator | Wednesday 14 May 2025 02:38:14 +0000 (0:00:02.681) 0:00:57.386 ********* 2025-05-14 02:39:53.942865 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:39:53.942883 | orchestrator | 2025-05-14 02:39:53.942903 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-14 02:39:53.942921 | orchestrator | Wednesday 14 May 2025 02:38:16 +0000 (0:00:02.230) 0:00:59.616 ********* 2025-05-14 02:39:53.942939 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:53.942951 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:53.942962 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:53.942973 | orchestrator | 2025-05-14 02:39:53.942983 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-14 02:39:53.942994 | orchestrator | Wednesday 14 May 2025 02:38:17 +0000 (0:00:01.050) 0:01:00.666 ********* 2025-05-14 02:39:53.943005 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:53.943025 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:39:53.943037 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:39:53.943048 | orchestrator | 2025-05-14 02:39:53.943059 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-14 02:39:53.943070 | orchestrator | Wednesday 14 May 2025 02:38:18 +0000 (0:00:00.340) 0:01:01.007 ********* 2025-05-14 02:39:53.943080 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:53.943091 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:39:53.943102 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:39:53.943113 | orchestrator | 2025-05-14 02:39:53.943123 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-14 02:39:53.943134 | orchestrator | Wednesday 14 May 2025 02:38:18 +0000 (0:00:00.372) 0:01:01.379 ********* 2025-05-14 02:39:53.943145 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:39:53.943156 | orchestrator | 2025-05-14 02:39:53.943167 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap 
container] ******************
2025-05-14 02:39:53.943178 | orchestrator | Wednesday 14 May 2025 02:38:31 +0000 (0:00:13.002) 0:01:14.382 *********
2025-05-14 02:39:53.943188 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:39:53.943199 | orchestrator |
2025-05-14 02:39:53.943210 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-14 02:39:53.943221 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:09.164) 0:01:23.546 *********
2025-05-14 02:39:53.943234 | orchestrator |
2025-05-14 02:39:53.943252 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-14 02:39:53.943272 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:00.049) 0:01:23.596 *********
2025-05-14 02:39:53.943290 | orchestrator |
2025-05-14 02:39:53.943309 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-14 02:39:53.943327 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:00.049) 0:01:23.645 *********
2025-05-14 02:39:53.943345 | orchestrator |
2025-05-14 02:39:53.943372 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-05-14 02:39:53.943392 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:00.052) 0:01:23.697 *********
2025-05-14 02:39:53.943412 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:39:53.943443 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:39:53.943461 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:39:53.943479 | orchestrator |
2025-05-14 02:39:53.943498 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-05-14 02:39:53.943515 | orchestrator | Wednesday 14 May 2025 02:38:51 +0000 (0:00:10.071) 0:01:33.769 *********
2025-05-14 02:39:53.943534 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:39:53.943553 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:39:53.943649 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:39:53.943669 | orchestrator |
2025-05-14 02:39:53.943687 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-05-14 02:39:53.943705 | orchestrator | Wednesday 14 May 2025 02:38:56 +0000 (0:00:05.658) 0:01:39.427 *********
2025-05-14 02:39:53.943717 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:39:53.943732 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:39:53.943751 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:39:53.943769 | orchestrator |
2025-05-14 02:39:53.943787 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-14 02:39:53.943805 | orchestrator | Wednesday 14 May 2025 02:39:07 +0000 (0:00:10.552) 0:01:49.980 *********
2025-05-14 02:39:53.943824 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-14 02:39:53.943842 | orchestrator |
2025-05-14 02:39:53.943858 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-05-14 02:39:53.943870 | orchestrator | Wednesday 14 May 2025 02:39:08 +0000 (0:00:00.795) 0:01:50.776 *********
2025-05-14 02:39:53.943881 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:53.943892 | orchestrator | ok: [testbed-node-1]
2025-05-14 02:39:53.943902 | orchestrator | ok: [testbed-node-2]
2025-05-14 02:39:53.943913 | orchestrator |
2025-05-14 02:39:53.943924 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-05-14 02:39:53.943935 | orchestrator | Wednesday 14 May 2025 02:39:09 +0000 (0:00:01.039) 0:01:51.816 *********
2025-05-14 02:39:53.943946 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:39:53.943956 | orchestrator |
2025-05-14 02:39:53.943967 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-05-14 02:39:53.943978 | orchestrator | Wednesday 14 May 2025 02:39:10 +0000 (0:00:01.469) 0:01:53.286 *********
2025-05-14 02:39:53.943989 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-05-14 02:39:53.943999 | orchestrator |
2025-05-14 02:39:53.944010 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-05-14 02:39:53.944021 | orchestrator | Wednesday 14 May 2025 02:39:20 +0000 (0:00:09.900) 0:02:03.186 *********
2025-05-14 02:39:53.944032 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-05-14 02:39:53.944043 | orchestrator |
2025-05-14 02:39:53.944053 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-05-14 02:39:53.944064 | orchestrator | Wednesday 14 May 2025 02:39:41 +0000 (0:00:20.590) 0:02:23.777 *********
2025-05-14 02:39:53.944075 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-05-14 02:39:53.944086 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-05-14 02:39:53.944097 | orchestrator |
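The service-ks-register block above registers keystone itself in the catalog: one identity service plus an internal endpoint at https://api-int.testbed.osism.xyz:5000 and a public endpoint at https://api.testbed.osism.xyz:5000 in RegionOne. A minimal openstacksdk sketch of the equivalent API calls, assuming admin credentials in a clouds.yaml entry named "testbed" (the cloud name is an assumption, not taken from this log):

    #!/usr/bin/env python3
    # Sketch of the catalog entries created above via openstacksdk; not the
    # service-ks-register role itself.
    import openstack

    conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry with admin rights

    service = conn.identity.create_service(name="keystone", type="identity")
    for interface, url in {
        "internal": "https://api-int.testbed.osism.xyz:5000",
        "public": "https://api.testbed.osism.xyz:5000",
    }.items():
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",
        )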
2025-05-14 02:39:53.944108 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-05-14 02:39:53.944119 | orchestrator | Wednesday 14 May 2025 02:39:48 +0000 (0:00:07.283) 0:02:31.061 *********
2025-05-14 02:39:53.944130 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:39:53.944140 | orchestrator |
2025-05-14 02:39:53.944149 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-05-14 02:39:53.944159 | orchestrator | Wednesday 14 May 2025 02:39:48 +0000 (0:00:00.131) 0:02:31.192 *********
2025-05-14 02:39:53.944169 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:39:53.944178 | orchestrator |
2025-05-14 02:39:53.944188 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-05-14 02:39:53.944490 | orchestrator | Wednesday 14 May 2025 02:39:48 +0000 (0:00:00.121) 0:02:31.313 *********
2025-05-14 02:39:53.944508 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:39:53.944518 | orchestrator |
2025-05-14 02:39:53.944528 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-05-14 02:39:53.944538 | orchestrator | Wednesday 14 May 2025 02:39:48 +0000 (0:00:00.119) 0:02:31.432 *********
2025-05-14 02:39:53.944548 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:39:53.944584 | orchestrator |
2025-05-14 02:39:53.944599 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-05-14 02:39:53.944608 | orchestrator | Wednesday 14 May 2025 02:39:49 +0000 (0:00:00.416) 0:02:31.849 *********
2025-05-14 02:39:53.944618 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:53.944628 | orchestrator |
2025-05-14 02:39:53.944637 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-14 02:39:53.944647 | orchestrator | Wednesday 14 May 2025 02:39:52 +0000 (0:00:03.467) 0:02:35.317 *********
2025-05-14 02:39:53.944656 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:39:53.944666 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:39:53.944675 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:39:53.944685 | orchestrator |
2025-05-14 02:39:53.944695 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:39:53.944705 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-14 02:39:53.944716 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-14 02:39:53.944735 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-14 02:39:53.944745 | orchestrator |
2025-05-14 02:39:53.944754 | orchestrator |
2025-05-14 02:39:53.944764 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:39:53.944774 | orchestrator | Wednesday 14 May 2025 02:39:53 +0000 (0:00:00.544) 0:02:35.861 *********
2025-05-14 02:39:53.944783 | orchestrator | ===============================================================================
2025-05-14 02:39:53.944793 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.59s
2025-05-14 02:39:53.944802 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.00s
2025-05-14 02:39:53.944812 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 11.12s
2025-05-14 02:39:53.944821 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.55s
2025-05-14 02:39:53.944831 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.07s
2025-05-14 02:39:53.944840 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.90s
2025-05-14 02:39:53.944850 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.16s
2025-05-14 02:39:53.944859 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.28s
2025-05-14 02:39:53.944868 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.06s
2025-05-14 02:39:53.944878 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.66s
2025-05-14 02:39:53.944887 | orchestrator | keystone : Copying over config.json files for services ------------------ 4.15s
2025-05-14 02:39:53.944897 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.49s
2025-05-14 02:39:53.944906 | orchestrator | keystone : Creating default user role ----------------------------------- 3.47s
2025-05-14 02:39:53.944916 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.29s
2025-05-14 02:39:53.944925 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.89s
2025-05-14 02:39:53.944935 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.68s
2025-05-14 02:39:53.944953 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.61s
2025-05-14 02:39:53.944963 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.57s
2025-05-14 02:39:53.944975 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.44s
2025-05-14 02:39:53.944991 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.23s
2025-05-14 02:39:53.945008 | orchestrator | 2025-05-14 02:39:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:39:53.945024 | orchestrator | 2025-05-14 02:39:53 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state STARTED
2025-05-14 02:39:53.945040 | orchestrator | 2025-05-14 02:39:53 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED
2025-05-14 02:39:53.945057 | orchestrator | 2025-05-14 02:39:53 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:39:56.965760 | orchestrator | 2025-05-14 02:39:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:39:56.965815 | orchestrator | 2025-05-14 02:39:56 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED
2025-05-14 02:39:56.966395 | orchestrator | 2025-05-14 02:39:56 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED
2025-05-14 02:39:56.967142 | orchestrator | 2025-05-14 02:39:56 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED
2025-05-14 02:39:56.968082 | orchestrator | 2025-05-14 02:39:56 | INFO  | Task 3f262b2d-ea33-4e4c-a3d9-f647dade75f0 is in state SUCCESS
2025-05-14 02:39:56.970872 | orchestrator |
2025-05-14 02:39:56.970895 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-14 02:39:56.970903 | orchestrator |
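The interleaved INFO lines show how the OSISM orchestrator tracks the individual deploy actions: each one is a task with a UUID whose state is polled until it reaches SUCCESS, sleeping one second between checks. A small sketch of such a wait loop follows; the real client-side lookup is not shown in this log, so the state source here is injected by the caller rather than being an actual osism API call.

    #!/usr/bin/env python3
    # Sketch of the task wait loop reflected in the INFO lines above; the
    # state lookup is passed in because the real OSISM client call is not
    # part of this log.
    import time
    from typing import Callable, Iterable

    def wait_for_tasks(task_ids: Iterable[str],
                       get_state: Callable[[str], str],
                       interval: float = 1.0) -> None:
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)

    if __name__ == "__main__":
        # Toy state source so the sketch runs standalone.
        states = iter(["STARTED", "STARTED", "SUCCESS"])
        wait_for_tasks(["3f262b2d-ea33-4e4c-a3d9-f647dade75f0"],
                       lambda _tid: next(states))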
2025-05-14 02:39:56.970910 | orchestrator | PLAY [Apply role fetch-keys] ***************************************************
2025-05-14 02:39:56.970925 | orchestrator |
2025-05-14 02:39:56.970936 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-14 02:39:56.970948 | orchestrator | Wednesday 14 May 2025 02:39:26 +0000 (0:00:00.456) 0:00:00.456 *********
2025-05-14 02:39:56.970958 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0
2025-05-14 02:39:56.970971 | orchestrator |
2025-05-14 02:39:56.970982 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-14 02:39:56.970994 | orchestrator | Wednesday 14 May 2025 02:39:26 +0000 (0:00:00.211) 0:00:00.667 *********
2025-05-14 02:39:56.971006 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-14 02:39:56.971019 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
2025-05-14 02:39:56.971031 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2)
2025-05-14 02:39:56.971042 | orchestrator |
2025-05-14 02:39:56.971054 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-14 02:39:56.971066 | orchestrator | Wednesday 14 May 2025 02:39:27 +0000 (0:00:00.870) 0:00:01.537 *********
2025-05-14 02:39:56.971078 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2025-05-14 02:39:56.971089 | orchestrator |
2025-05-14 02:39:56.971119 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-14 02:39:56.971131 | orchestrator | Wednesday 14 May 2025 02:39:27 +0000 (0:00:00.216) 0:00:01.754 *********
2025-05-14 02:39:56.971142 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:56.971149 | orchestrator |
2025-05-14 02:39:56.971156 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-14 02:39:56.971163 | orchestrator | Wednesday 14 May 2025 02:39:28 +0000 (0:00:00.632) 0:00:02.387 *********
2025-05-14 02:39:56.971169 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:56.971176 | orchestrator |
2025-05-14 02:39:56.971183 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-14 02:39:56.971215 | orchestrator | Wednesday 14 May 2025 02:39:28 +0000 (0:00:00.143) 0:00:02.531 *********
2025-05-14 02:39:56.971222 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:56.971229 | orchestrator |
2025-05-14 02:39:56.971236 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-05-14 02:39:56.971242 | orchestrator | Wednesday 14 May 2025 02:39:29 +0000 (0:00:00.476) 0:00:03.008 *********
2025-05-14 02:39:56.971249 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:56.971256 | orchestrator |
2025-05-14 02:39:56.971263 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-05-14 02:39:56.971269 | orchestrator | Wednesday 14 May 2025 02:39:29 +0000 (0:00:00.131) 0:00:03.140 *********
2025-05-14 02:39:56.971276 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:56.971283 | orchestrator |
2025-05-14 02:39:56.971289 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-14 02:39:56.971296 | orchestrator | Wednesday 14 May 2025 02:39:29 +0000 (0:00:00.145) 0:00:03.285 *********
2025-05-14 02:39:56.971303 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:56.971309 | orchestrator |
2025-05-14 02:39:56.971316 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-14 02:39:56.971323 | orchestrator | Wednesday 14 May 2025 02:39:29 +0000 (0:00:00.133) 0:00:03.419 *********
2025-05-14 02:39:56.971330 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:39:56.971338 | orchestrator |
2025-05-14 02:39:56.971345 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-14 02:39:56.971351 | orchestrator | Wednesday 14 May 2025 02:39:29 +0000 (0:00:00.156) 0:00:03.576 *********
2025-05-14 02:39:56.971358 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:56.971365 | orchestrator |
2025-05-14 02:39:56.971372 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-14 02:39:56.971379 | orchestrator | Wednesday 14 May 2025 02:39:29 +0000 (0:00:00.144) 0:00:03.721 *********
2025-05-14 02:39:56.971388 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-14 02:39:56.971400 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-14 02:39:56.971411 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-14 02:39:56.971422 | orchestrator |
2025-05-14 02:39:56.971433 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-05-14 02:39:56.971444 | orchestrator | Wednesday 14 May 2025 02:39:30 +0000 (0:00:00.268) 0:00:04.608 *********
2025-05-14 02:39:56.971455 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:39:56.971466 | orchestrator |
2025-05-14 02:39:56.971477 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-05-14 02:39:56.971489 | orchestrator | Wednesday 14 May 2025 02:39:31 +0000 (0:00:00.268) 0:00:04.876 *********
2025-05-14 02:39:56.971502 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-14 02:39:56.971514 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-14 02:39:56.971527 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-14 02:39:56.971539 | orchestrator |
2025-05-14 02:39:56.971550 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-05-14 02:39:56.971583 | orchestrator | Wednesday 14 May 2025 02:39:33 +0000 (0:00:02.003) 0:00:06.879 *********
2025-05-14 02:39:56.971592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-14 02:39:56.971600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-14 02:39:56.971608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-14 02:39:56.971615 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:39:56.971623 | orchestrator |
2025-05-14 02:39:56.971631 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-05-14 02:39:56.971649 | orchestrator | Wednesday 14 May 2025 02:39:33 +0000 (0:00:00.430) 0:00:07.310 *********
2025-05-14 02:39:56.971668 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-14 02:39:56.971678 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-14 02:39:56.971685 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-14 02:39:56.971691 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:39:56.971698 | orchestrator |
2025-05-14 02:39:56.971705 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-05-14 02:39:56.971712 | orchestrator | Wednesday 14 May 2025 02:39:34 +0000 (0:00:00.747) 0:00:08.058 *********
2025-05-14 02:39:56.971726 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-14 02:39:56.971738 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool',
'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:39:56.971745 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:39:56.971752 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.971759 | orchestrator | 2025-05-14 02:39:56.971766 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-14 02:39:56.971772 | orchestrator | Wednesday 14 May 2025 02:39:34 +0000 (0:00:00.166) 0:00:08.224 ********* 2025-05-14 02:39:56.971782 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '45a4e245ab61', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-14 02:39:31.741659', 'end': '2025-05-14 02:39:31.782936', 'delta': '0:00:00.041277', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['45a4e245ab61'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-14 02:39:56.971793 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '42a03557f02e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-14 02:39:32.383303', 'end': '2025-05-14 02:39:32.439320', 'delta': '0:00:00.056017', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['42a03557f02e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-14 02:39:56.971813 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '46a6c4ca095b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-14 02:39:32.943231', 'end': '2025-05-14 02:39:32.981910', 'delta': '0:00:00.038679', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['46a6c4ca095b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-14 02:39:56.971821 | orchestrator | 2025-05-14 02:39:56.971828 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-14 02:39:56.971835 | orchestrator | Wednesday 14 May 2025 02:39:34 +0000 (0:00:00.214) 0:00:08.439 ********* 2025-05-14 02:39:56.971842 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:56.971849 
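The "find a running mon container" task above shells out to docker ps -q --filter name=ceph-mon-<hostname> on each monitor host, and the "get current fsid if cluster is already running" task that follows reuses one of those containers to ask the cluster for its fsid. A rough Python equivalent of the two steps, assuming ceph fsid is the query run inside the container (the exact ceph-ansible command line may differ):

# Sketch of the two steps logged above: locate the ceph-mon container for a host
# with `docker ps -q --filter name=...`, then run `ceph fsid` inside it.
# Assumption: `ceph fsid` is the query used; ceph-ansible's exact invocation may differ.
import subprocess

def find_mon_container(hostname: str) -> str | None:
    out = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out.splitlines()[0] if out else None

def get_current_fsid(hostname: str) -> str | None:
    container = find_mon_container(hostname)
    if container is None:
        return None  # no running mon: a new fsid would be generated instead
    return subprocess.run(
        ["docker", "exec", container, "ceph", "fsid"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

# Example matching the log: find_mon_container("testbed-node-0") -> "45a4e245ab61"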
| orchestrator | 2025-05-14 02:39:56.971855 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-14 02:39:56.971862 | orchestrator | Wednesday 14 May 2025 02:39:34 +0000 (0:00:00.257) 0:00:08.696 ********* 2025-05-14 02:39:56.971869 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-14 02:39:56.971876 | orchestrator | 2025-05-14 02:39:56.971882 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-14 02:39:56.971893 | orchestrator | Wednesday 14 May 2025 02:39:36 +0000 (0:00:01.545) 0:00:10.242 ********* 2025-05-14 02:39:56.971900 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.971906 | orchestrator | 2025-05-14 02:39:56.971913 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-14 02:39:56.971919 | orchestrator | Wednesday 14 May 2025 02:39:36 +0000 (0:00:00.138) 0:00:10.381 ********* 2025-05-14 02:39:56.971926 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.971933 | orchestrator | 2025-05-14 02:39:56.971939 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:39:56.971946 | orchestrator | Wednesday 14 May 2025 02:39:36 +0000 (0:00:00.207) 0:00:10.589 ********* 2025-05-14 02:39:56.971952 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.971959 | orchestrator | 2025-05-14 02:39:56.971966 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-14 02:39:56.971973 | orchestrator | Wednesday 14 May 2025 02:39:36 +0000 (0:00:00.141) 0:00:10.730 ********* 2025-05-14 02:39:56.971979 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:56.971986 | orchestrator | 2025-05-14 02:39:56.971993 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-14 02:39:56.971999 | orchestrator | Wednesday 14 May 2025 02:39:37 +0000 (0:00:00.142) 0:00:10.872 ********* 2025-05-14 02:39:56.972006 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972012 | orchestrator | 2025-05-14 02:39:56.972019 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:39:56.972026 | orchestrator | Wednesday 14 May 2025 02:39:37 +0000 (0:00:00.231) 0:00:11.104 ********* 2025-05-14 02:39:56.972032 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972039 | orchestrator | 2025-05-14 02:39:56.972045 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-14 02:39:56.972052 | orchestrator | Wednesday 14 May 2025 02:39:37 +0000 (0:00:00.113) 0:00:11.217 ********* 2025-05-14 02:39:56.972058 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972065 | orchestrator | 2025-05-14 02:39:56.972071 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-14 02:39:56.972078 | orchestrator | Wednesday 14 May 2025 02:39:37 +0000 (0:00:00.144) 0:00:11.361 ********* 2025-05-14 02:39:56.972091 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972098 | orchestrator | 2025-05-14 02:39:56.972105 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-14 02:39:56.972111 | orchestrator | Wednesday 14 May 2025 02:39:37 +0000 (0:00:00.134) 0:00:11.496 ********* 2025-05-14 02:39:56.972118 | orchestrator | skipping: 
[testbed-node-0] 2025-05-14 02:39:56.972125 | orchestrator | 2025-05-14 02:39:56.972131 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-14 02:39:56.972138 | orchestrator | Wednesday 14 May 2025 02:39:37 +0000 (0:00:00.136) 0:00:11.632 ********* 2025-05-14 02:39:56.972144 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972151 | orchestrator | 2025-05-14 02:39:56.972157 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-14 02:39:56.972164 | orchestrator | Wednesday 14 May 2025 02:39:38 +0000 (0:00:00.141) 0:00:11.773 ********* 2025-05-14 02:39:56.972170 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972177 | orchestrator | 2025-05-14 02:39:56.972183 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-14 02:39:56.972190 | orchestrator | Wednesday 14 May 2025 02:39:38 +0000 (0:00:00.310) 0:00:12.084 ********* 2025-05-14 02:39:56.972197 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972203 | orchestrator | 2025-05-14 02:39:56.972210 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-14 02:39:56.972216 | orchestrator | Wednesday 14 May 2025 02:39:38 +0000 (0:00:00.132) 0:00:12.216 ********* 2025-05-14 02:39:56.972223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:56.972236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:56.972243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:56.972250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:56.972258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:56.972264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:56.972373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:56.972388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:39:56.972406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part1', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part14', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part15', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part16', 'scsi-SQEMU_QEMU_HARDDISK_52375a6f-eba6-4d12-851a-4fdfc6d8b008-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:56.972421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-40-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:39:56.972429 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972436 | orchestrator | 2025-05-14 02:39:56.972443 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-14 02:39:56.972450 | orchestrator | Wednesday 14 May 2025 02:39:38 +0000 (0:00:00.252) 0:00:12.469 ********* 2025-05-14 02:39:56.972464 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972471 | orchestrator | 2025-05-14 02:39:56.972477 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-14 02:39:56.972484 | orchestrator | Wednesday 14 May 2025 02:39:38 +0000 (0:00:00.245) 0:00:12.715 ********* 2025-05-14 02:39:56.972491 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972497 | orchestrator | 2025-05-14 02:39:56.972504 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-14 02:39:56.972511 | orchestrator | Wednesday 14 May 2025 02:39:39 +0000 (0:00:00.145) 0:00:12.860 ********* 2025-05-14 02:39:56.972517 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972524 | orchestrator | 2025-05-14 02:39:56.972530 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-14 02:39:56.972537 | orchestrator | Wednesday 14 May 2025 02:39:39 +0000 (0:00:00.139) 0:00:13.000 ********* 2025-05-14 02:39:56.972544 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:56.972550 | orchestrator | 2025-05-14 02:39:56.972575 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-14 02:39:56.972582 | orchestrator | Wednesday 14 May 2025 02:39:39 +0000 (0:00:00.539) 0:00:13.540 ********* 2025-05-14 02:39:56.972588 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:56.972595 | orchestrator | 2025-05-14 02:39:56.972602 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:39:56.972609 | orchestrator | Wednesday 14 May 2025 02:39:39 +0000 (0:00:00.134) 0:00:13.675 ********* 2025-05-14 02:39:56.972615 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:56.972622 | orchestrator | 2025-05-14 02:39:56.972629 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:39:56.972635 | orchestrator | Wednesday 14 May 2025 02:39:40 +0000 (0:00:00.499) 0:00:14.175 ********* 2025-05-14 02:39:56.972642 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:56.972648 | orchestrator | 2025-05-14 02:39:56.972655 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:39:56.972662 | orchestrator | Wednesday 14 May 2025 02:39:40 +0000 (0:00:00.170) 0:00:14.346 ********* 2025-05-14 02:39:56.972669 | orchestrator | skipping: 
[testbed-node-0] 2025-05-14 02:39:56.972675 | orchestrator | 2025-05-14 02:39:56.972682 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:39:56.972689 | orchestrator | Wednesday 14 May 2025 02:39:41 +0000 (0:00:00.634) 0:00:14.980 ********* 2025-05-14 02:39:56.972695 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972702 | orchestrator | 2025-05-14 02:39:56.972708 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-14 02:39:56.972715 | orchestrator | Wednesday 14 May 2025 02:39:41 +0000 (0:00:00.153) 0:00:15.134 ********* 2025-05-14 02:39:56.972721 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:39:56.972728 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:39:56.972735 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:39:56.972742 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972748 | orchestrator | 2025-05-14 02:39:56.972755 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-14 02:39:56.972761 | orchestrator | Wednesday 14 May 2025 02:39:41 +0000 (0:00:00.436) 0:00:15.570 ********* 2025-05-14 02:39:56.972768 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:39:56.972775 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:39:56.972781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:39:56.972788 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972795 | orchestrator | 2025-05-14 02:39:56.972806 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-14 02:39:56.972813 | orchestrator | Wednesday 14 May 2025 02:39:42 +0000 (0:00:00.474) 0:00:16.045 ********* 2025-05-14 02:39:56.972825 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:39:56.972832 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 02:39:56.972839 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 02:39:56.972845 | orchestrator | 2025-05-14 02:39:56.972852 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-14 02:39:56.972858 | orchestrator | Wednesday 14 May 2025 02:39:43 +0000 (0:00:01.117) 0:00:17.163 ********* 2025-05-14 02:39:56.972865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:39:56.972871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:39:56.972878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:39:56.972885 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972891 | orchestrator | 2025-05-14 02:39:56.972898 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-14 02:39:56.972904 | orchestrator | Wednesday 14 May 2025 02:39:43 +0000 (0:00:00.207) 0:00:17.371 ********* 2025-05-14 02:39:56.972911 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:39:56.972918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:39:56.972924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:39:56.972931 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.972937 | 
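The _monitor_addresses fact assembled here ends up as a list of name/addr pairs, one per monitor (the address-block and interface variants are skipped in this run). A small sketch of that data structure, using the testbed addresses that appear in this output:

# Sketch: the _monitor_addresses fact as a plain data structure.
# The addresses below are the ones visible in this log; in ceph-ansible the list
# is assembled per host from monitor_address / monitor_address_block /
# monitor_interface, whichever is set.
monitor_address = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

_monitor_addresses = [
    {"name": name, "addr": addr} for name, addr in monitor_address.items()
]

# _current_monitor_address is then just the entry for the host being processed:
_current_monitor_address = next(
    m["addr"] for m in _monitor_addresses if m["name"] == "testbed-node-0"
)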
orchestrator | 2025-05-14 02:39:56.972947 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-14 02:39:56.972954 | orchestrator | Wednesday 14 May 2025 02:39:43 +0000 (0:00:00.206) 0:00:17.577 ********* 2025-05-14 02:39:56.972961 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-14 02:39:56.972968 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:39:56.972976 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:39:56.972982 | orchestrator | 2025-05-14 02:39:56.972989 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-14 02:39:56.972995 | orchestrator | Wednesday 14 May 2025 02:39:44 +0000 (0:00:00.213) 0:00:17.790 ********* 2025-05-14 02:39:56.973002 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.973008 | orchestrator | 2025-05-14 02:39:56.973015 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-14 02:39:56.973022 | orchestrator | Wednesday 14 May 2025 02:39:44 +0000 (0:00:00.141) 0:00:17.931 ********* 2025-05-14 02:39:56.973028 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:39:56.973035 | orchestrator | 2025-05-14 02:39:56.973041 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-14 02:39:56.973048 | orchestrator | Wednesday 14 May 2025 02:39:44 +0000 (0:00:00.141) 0:00:18.073 ********* 2025-05-14 02:39:56.973054 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:39:56.973061 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:39:56.973068 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:39:56.973074 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 02:39:56.973081 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:39:56.973087 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:39:56.973094 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:39:56.973101 | orchestrator | 2025-05-14 02:39:56.973107 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-14 02:39:56.973114 | orchestrator | Wednesday 14 May 2025 02:39:45 +0000 (0:00:01.239) 0:00:19.313 ********* 2025-05-14 02:39:56.973120 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:39:56.973132 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:39:56.973139 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:39:56.973145 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 02:39:56.973152 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:39:56.973158 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:39:56.973165 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:39:56.973171 | orchestrator | 2025-05-14 02:39:56.973178 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-14 02:39:56.973185 | orchestrator | Wednesday 14 May 2025 02:39:47 +0000 (0:00:01.503) 0:00:20.816 ********* 2025-05-14 02:39:56.973191 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:39:56.973198 | orchestrator | 2025-05-14 02:39:56.973204 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-14 02:39:56.973211 | orchestrator | Wednesday 14 May 2025 02:39:47 +0000 (0:00:00.477) 0:00:21.293 ********* 2025-05-14 02:39:56.973218 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:39:56.973225 | orchestrator | 2025-05-14 02:39:56.973231 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-14 02:39:56.973239 | orchestrator | Wednesday 14 May 2025 02:39:48 +0000 (0:00:00.596) 0:00:21.890 ********* 2025-05-14 02:39:56.973249 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-14 02:39:56.973256 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-14 02:39:56.973263 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-14 02:39:56.973269 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-14 02:39:56.973276 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-14 02:39:56.973282 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-14 02:39:56.973289 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-14 02:39:56.973296 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-14 02:39:56.973302 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-14 02:39:56.973309 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-14 02:39:56.973315 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-14 02:39:56.973322 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-14 02:39:56.973328 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-14 02:39:56.973340 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-14 02:39:56.973346 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-14 02:39:56.973353 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-14 02:39:56.973359 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-14 02:39:56.973366 | orchestrator | 2025-05-14 02:39:56.973373 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:39:56.973379 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-14 02:39:56.973388 | orchestrator | 2025-05-14 02:39:56.973395 | orchestrator | 2025-05-14 02:39:56.973401 | 
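The ceph-fetch-keys play above looks up the user and bootstrap keyrings on the first monitor and copies them to a fetch directory on the Ansible side (/share/11111111-1111-1111-1111-111111111111/ in this run). A minimal sketch of that copy step, assuming plain scp access to the node (the role itself does this through Ansible, and the keyring list below is only a subset of the files shown above):

# Sketch: pull the keyrings listed in the log from the first mon node into a
# local fetch directory. Assumes password-less scp to the node; the real role
# uses Ansible's copy/fetch mechanics, not scp.
import pathlib
import subprocess

KEYRINGS = [
    "/etc/ceph/ceph.client.admin.keyring",
    "/etc/ceph/ceph.client.cinder.keyring",
    "/etc/ceph/ceph.client.cinder-backup.keyring",
    "/etc/ceph/ceph.client.glance.keyring",
    "/etc/ceph/ceph.client.nova.keyring",
    "/var/lib/ceph/bootstrap-osd/ceph.keyring",
]

def fetch_keys(node: str, fetch_dir: str) -> None:
    dest = pathlib.Path(fetch_dir)
    # "create a local fetch directory if it does not exist"
    dest.mkdir(parents=True, exist_ok=True)
    for keyring in KEYRINGS:
        subprocess.run(["scp", f"{node}:{keyring}", str(dest)], check=True)

# fetch_keys("testbed-node-0", "/share/11111111-1111-1111-1111-111111111111/")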
orchestrator | 2025-05-14 02:39:56.973408 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:39:56.973420 | orchestrator | Wednesday 14 May 2025 02:39:54 +0000 (0:00:06.517) 0:00:28.407 ********* 2025-05-14 02:39:56.973427 | orchestrator | =============================================================================== 2025-05-14 02:39:56.973433 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.52s 2025-05-14 02:39:56.973440 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.00s 2025-05-14 02:39:56.973446 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.55s 2025-05-14 02:39:56.973453 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.50s 2025-05-14 02:39:56.973460 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.24s 2025-05-14 02:39:56.973467 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.12s 2025-05-14 02:39:56.973473 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.89s 2025-05-14 02:39:56.973480 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.87s 2025-05-14 02:39:56.973487 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.75s 2025-05-14 02:39:56.973493 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.63s 2025-05-14 02:39:56.973500 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.63s 2025-05-14 02:39:56.973506 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.60s 2025-05-14 02:39:56.973513 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.54s 2025-05-14 02:39:56.973519 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.50s 2025-05-14 02:39:56.973526 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.48s 2025-05-14 02:39:56.973533 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.48s 2025-05-14 02:39:56.973539 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.47s 2025-05-14 02:39:56.973546 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.44s 2025-05-14 02:39:56.973562 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.43s 2025-05-14 02:39:56.973570 | orchestrator | ceph-facts : resolve bluestore_wal_device link(s) ----------------------- 0.31s 2025-05-14 02:39:56.973576 | orchestrator | 2025-05-14 02:39:56 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:39:56.973583 | orchestrator | 2025-05-14 02:39:56 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state STARTED 2025-05-14 02:39:56.973590 | orchestrator | 2025-05-14 02:39:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:00.001694 | orchestrator | 2025-05-14 02:39:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:00.002305 | orchestrator | 2025-05-14 02:40:00 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is 
in state STARTED 2025-05-14 02:40:00.006432 | orchestrator | 2025-05-14 02:40:00 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:00.008470 | orchestrator | 2025-05-14 02:40:00 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:00.011804 | orchestrator | 2025-05-14 02:40:00 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:00.013112 | orchestrator | 2025-05-14 02:40:00 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:00.014300 | orchestrator | 2025-05-14 02:40:00 | INFO  | Task 0fc3fb37-e6b8-4542-9f99-497fa4ca1831 is in state SUCCESS 2025-05-14 02:40:00.014456 | orchestrator | 2025-05-14 02:40:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:03.054103 | orchestrator | 2025-05-14 02:40:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:03.056663 | orchestrator | 2025-05-14 02:40:03 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:03.059387 | orchestrator | 2025-05-14 02:40:03 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:03.061594 | orchestrator | 2025-05-14 02:40:03 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:03.063389 | orchestrator | 2025-05-14 02:40:03 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:03.065307 | orchestrator | 2025-05-14 02:40:03 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:03.065356 | orchestrator | 2025-05-14 02:40:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:06.108821 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:06.109058 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:06.109859 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:06.111073 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:06.111943 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:06.113196 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:06.113282 | orchestrator | 2025-05-14 02:40:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:09.167344 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:09.169993 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:09.172005 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:09.175313 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:09.176492 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:09.178082 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:09.178437 | orchestrator | 2025-05-14 02:40:09 | INFO  | 
Wait 1 second(s) until the next check 2025-05-14 02:40:12.228494 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:12.230173 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:12.240115 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:12.244528 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:12.246211 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:12.248135 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:12.248165 | orchestrator | 2025-05-14 02:40:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:15.297111 | orchestrator | 2025-05-14 02:40:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:15.301121 | orchestrator | 2025-05-14 02:40:15 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:15.301190 | orchestrator | 2025-05-14 02:40:15 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:15.301204 | orchestrator | 2025-05-14 02:40:15 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:15.302234 | orchestrator | 2025-05-14 02:40:15 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:15.303111 | orchestrator | 2025-05-14 02:40:15 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:15.303185 | orchestrator | 2025-05-14 02:40:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:18.354805 | orchestrator | 2025-05-14 02:40:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:18.356180 | orchestrator | 2025-05-14 02:40:18 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:18.356217 | orchestrator | 2025-05-14 02:40:18 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:18.357328 | orchestrator | 2025-05-14 02:40:18 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:18.358791 | orchestrator | 2025-05-14 02:40:18 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:18.360142 | orchestrator | 2025-05-14 02:40:18 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:18.360179 | orchestrator | 2025-05-14 02:40:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:21.408923 | orchestrator | 2025-05-14 02:40:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:21.409988 | orchestrator | 2025-05-14 02:40:21 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:21.411340 | orchestrator | 2025-05-14 02:40:21 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:21.412715 | orchestrator | 2025-05-14 02:40:21 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:21.413748 | orchestrator | 2025-05-14 02:40:21 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:21.415948 | orchestrator | 
2025-05-14 02:40:21 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:21.416028 | orchestrator | 2025-05-14 02:40:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:24.457052 | orchestrator | 2025-05-14 02:40:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:24.458430 | orchestrator | 2025-05-14 02:40:24 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:24.458668 | orchestrator | 2025-05-14 02:40:24 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:24.461117 | orchestrator | 2025-05-14 02:40:24 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:24.461174 | orchestrator | 2025-05-14 02:40:24 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:24.462483 | orchestrator | 2025-05-14 02:40:24 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:24.462534 | orchestrator | 2025-05-14 02:40:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:27.501673 | orchestrator | 2025-05-14 02:40:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:27.502450 | orchestrator | 2025-05-14 02:40:27 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:27.503607 | orchestrator | 2025-05-14 02:40:27 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:27.504874 | orchestrator | 2025-05-14 02:40:27 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:27.506439 | orchestrator | 2025-05-14 02:40:27 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:27.507497 | orchestrator | 2025-05-14 02:40:27 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:27.507774 | orchestrator | 2025-05-14 02:40:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:30.565120 | orchestrator | 2025-05-14 02:40:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:30.565240 | orchestrator | 2025-05-14 02:40:30 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state STARTED 2025-05-14 02:40:30.566491 | orchestrator | 2025-05-14 02:40:30 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:30.566940 | orchestrator | 2025-05-14 02:40:30 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:30.569232 | orchestrator | 2025-05-14 02:40:30 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:30.570850 | orchestrator | 2025-05-14 02:40:30 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:30.570895 | orchestrator | 2025-05-14 02:40:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:33.609155 | orchestrator | 2025-05-14 02:40:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:33.609264 | orchestrator | 2025-05-14 02:40:33 | INFO  | Task bf91bccc-8c6f-42f7-b11e-8baaeb5b271f is in state SUCCESS 2025-05-14 02:40:33.609303 | orchestrator | 2025-05-14 02:40:33.609426 | orchestrator | 2025-05-14 02:40:33.609501 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-14 02:40:33.609521 | orchestrator | 2025-05-14 02:40:33.609616 | 
orchestrator | TASK [Check ceph keys] ********************************************************* 2025-05-14 02:40:33.609639 | orchestrator | Wednesday 14 May 2025 02:39:17 +0000 (0:00:00.142) 0:00:00.142 ********* 2025-05-14 02:40:33.609657 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-14 02:40:33.609675 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:40:33.609694 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:40:33.609712 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 02:40:33.609729 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:40:33.609745 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-14 02:40:33.609762 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-14 02:40:33.609778 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-14 02:40:33.609796 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-14 02:40:33.609812 | orchestrator | 2025-05-14 02:40:33.609828 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-05-14 02:40:33.609845 | orchestrator | Wednesday 14 May 2025 02:39:20 +0000 (0:00:02.968) 0:00:03.110 ********* 2025-05-14 02:40:33.609867 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-14 02:40:33.609889 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:40:33.609938 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:40:33.609956 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 02:40:33.609976 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:40:33.610001 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-14 02:40:33.610108 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-14 02:40:33.610133 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-14 02:40:33.610152 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-14 02:40:33.610171 | orchestrator | 2025-05-14 02:40:33.610188 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-05-14 02:40:33.610203 | orchestrator | Wednesday 14 May 2025 02:39:21 +0000 (0:00:00.216) 0:00:03.327 ********* 2025-05-14 02:40:33.610220 | orchestrator | ok: [testbed-manager] => { 2025-05-14 02:40:33.610240 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 
2025-05-14 02:40:33.610260 | orchestrator | } 2025-05-14 02:40:33.610279 | orchestrator | 2025-05-14 02:40:33.610297 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-05-14 02:40:33.610314 | orchestrator | Wednesday 14 May 2025 02:39:21 +0000 (0:00:00.155) 0:00:03.482 ********* 2025-05-14 02:40:33.610332 | orchestrator | changed: [testbed-manager] 2025-05-14 02:40:33.610349 | orchestrator | 2025-05-14 02:40:33.610367 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-05-14 02:40:33.610384 | orchestrator | Wednesday 14 May 2025 02:39:55 +0000 (0:00:34.223) 0:00:37.705 ********* 2025-05-14 02:40:33.610402 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-05-14 02:40:33.610420 | orchestrator | 2025-05-14 02:40:33.610437 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-05-14 02:40:33.610454 | orchestrator | Wednesday 14 May 2025 02:39:55 +0000 (0:00:00.477) 0:00:38.183 ********* 2025-05-14 02:40:33.610473 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-05-14 02:40:33.610492 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-05-14 02:40:33.610515 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-05-14 02:40:33.610534 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-05-14 02:40:33.610599 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-05-14 02:40:33.610639 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-05-14 02:40:33.610670 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-05-14 02:40:33.610688 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-05-14 02:40:33.610723 | orchestrator | 2025-05-14 02:40:33.610739 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-05-14 02:40:33.610755 | orchestrator | Wednesday 14 May 2025 02:39:58 +0000 (0:00:02.551) 0:00:40.734 ********* 2025-05-14 02:40:33.610771 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:40:33.610787 | orchestrator | 2025-05-14 02:40:33.610803 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:40:33.611025 | orchestrator | 
testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:40:33.611043 | orchestrator | 2025-05-14 02:40:33.611060 | orchestrator | Wednesday 14 May 2025 02:39:58 +0000 (0:00:00.023) 0:00:40.758 ********* 2025-05-14 02:40:33.611075 | orchestrator | =============================================================================== 2025-05-14 02:40:33.611091 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 34.22s 2025-05-14 02:40:33.611107 | orchestrator | Check ceph keys --------------------------------------------------------- 2.97s 2025-05-14 02:40:33.611123 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.55s 2025-05-14 02:40:33.611137 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.48s 2025-05-14 02:40:33.611151 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.22s 2025-05-14 02:40:33.611165 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.16s 2025-05-14 02:40:33.611179 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.02s 2025-05-14 02:40:33.611193 | orchestrator | 2025-05-14 02:40:33.611216 | orchestrator | 2025-05-14 02:40:33 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:33.611892 | orchestrator | 2025-05-14 02:40:33 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:33.613237 | orchestrator | 2025-05-14 02:40:33 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:33.614068 | orchestrator | 2025-05-14 02:40:33 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:33.614109 | orchestrator | 2025-05-14 02:40:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:36.658340 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:40:36.658672 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:36.659261 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:36.661988 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:36.662704 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:36.663301 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:36.663328 | orchestrator | 2025-05-14 02:40:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:39.704948 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:40:39.705816 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:39.707047 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:39.708488 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:39.709301 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state 
STARTED 2025-05-14 02:40:39.710373 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:39.710504 | orchestrator | 2025-05-14 02:40:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:42.747793 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:40:42.749399 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:42.750306 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:42.751515 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:42.753286 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:42.754225 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:42.754279 | orchestrator | 2025-05-14 02:40:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:45.800924 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:40:45.801841 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:45.807009 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:45.807066 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:45.808309 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:45.808626 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:45.808653 | orchestrator | 2025-05-14 02:40:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:48.862214 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:40:48.862254 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:48.862259 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:48.862263 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:48.865358 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:48.868169 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:48.868239 | orchestrator | 2025-05-14 02:40:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:51.922487 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:40:51.923026 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:51.924665 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:51.925661 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task 
a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:51.926456 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:51.927433 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:51.927550 | orchestrator | 2025-05-14 02:40:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:54.998995 | orchestrator | 2025-05-14 02:40:54 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:40:55.000223 | orchestrator | 2025-05-14 02:40:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:55.002871 | orchestrator | 2025-05-14 02:40:55 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:55.006744 | orchestrator | 2025-05-14 02:40:55 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:55.010136 | orchestrator | 2025-05-14 02:40:55 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state STARTED 2025-05-14 02:40:55.011752 | orchestrator | 2025-05-14 02:40:55 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:55.011802 | orchestrator | 2025-05-14 02:40:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:58.059296 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:40:58.059421 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:40:58.059973 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:40:58.060697 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:40:58.062804 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:40:58.063663 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task 33238f9f-5050-4a26-8c6d-43e2d7273a48 is in state SUCCESS 2025-05-14 02:40:58.064226 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:40:58.064390 | orchestrator | 2025-05-14 02:40:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:01.104936 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:01.105142 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:01.106168 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:01.107308 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:01.108191 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:01.108920 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:01.109022 | orchestrator | 2025-05-14 02:41:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:04.170289 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:04.170923 | orchestrator | 2025-05-14 
02:41:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:04.172407 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:04.173405 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:04.176489 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:04.176596 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:04.176616 | orchestrator | 2025-05-14 02:41:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:07.219788 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:07.219871 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:07.219882 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:07.221431 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:07.225870 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:07.225962 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:07.225979 | orchestrator | 2025-05-14 02:41:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:10.256587 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:10.256853 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:10.257749 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:10.258695 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:10.259190 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:10.260031 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:10.260057 | orchestrator | 2025-05-14 02:41:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:13.280190 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:13.280327 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:13.281958 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:13.282687 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:13.283194 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:13.283782 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:13.283861 | orchestrator | 2025-05-14 02:41:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:16.311443 | 
orchestrator | 2025-05-14 02:41:16 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:16.311799 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:16.312477 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:16.313599 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:16.315349 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:16.315426 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:16.315443 | orchestrator | 2025-05-14 02:41:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:19.359224 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:19.359776 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:19.360896 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:19.361919 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:19.362595 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:19.363485 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:19.363747 | orchestrator | 2025-05-14 02:41:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:22.398184 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:22.398296 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:22.398933 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:22.403104 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:22.404374 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:22.409423 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:22.409631 | orchestrator | 2025-05-14 02:41:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:25.444342 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:25.444587 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:25.445078 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:25.445948 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:25.446808 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:25.447172 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is 
in state STARTED 2025-05-14 02:41:25.447296 | orchestrator | 2025-05-14 02:41:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:28.489077 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:28.489415 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:28.489989 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:28.491312 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:28.492238 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:28.494006 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:28.494098 | orchestrator | 2025-05-14 02:41:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:31.521346 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:31.521483 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:31.522132 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:31.522947 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:31.523714 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:31.524649 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:31.524780 | orchestrator | 2025-05-14 02:41:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:34.563347 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:34.563604 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:34.564881 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state STARTED 2025-05-14 02:41:34.566729 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:34.567583 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:34.568262 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:34.568411 | orchestrator | 2025-05-14 02:41:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:37.597033 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:37.597236 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:37.598071 | orchestrator | 2025-05-14 02:41:37.598099 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-14 02:41:37.598113 | orchestrator | 2025-05-14 02:41:37.598125 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-14 02:41:37.598139 | 
orchestrator | Wednesday 14 May 2025 02:39:57 +0000 (0:00:00.152) 0:00:00.152 ********* 2025-05-14 02:41:37.598152 | orchestrator | changed: [localhost] 2025-05-14 02:41:37.598165 | orchestrator | 2025-05-14 02:41:37.598178 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-05-14 02:41:37.598191 | orchestrator | Wednesday 14 May 2025 02:39:58 +0000 (0:00:00.647) 0:00:00.799 ********* 2025-05-14 02:41:37.598204 | orchestrator | changed: [localhost] 2025-05-14 02:41:37.598217 | orchestrator | 2025-05-14 02:41:37.598229 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-05-14 02:41:37.598240 | orchestrator | Wednesday 14 May 2025 02:40:27 +0000 (0:00:29.042) 0:00:29.841 ********* 2025-05-14 02:41:37.598251 | orchestrator | changed: [localhost] 2025-05-14 02:41:37.598262 | orchestrator | 2025-05-14 02:41:37.598273 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:41:37.598318 | orchestrator | 2025-05-14 02:41:37.598330 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:41:37.598340 | orchestrator | Wednesday 14 May 2025 02:40:31 +0000 (0:00:03.761) 0:00:33.603 ********* 2025-05-14 02:41:37.598351 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:37.598362 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:41:37.598373 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:41:37.598384 | orchestrator | 2025-05-14 02:41:37.598395 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:41:37.598406 | orchestrator | Wednesday 14 May 2025 02:40:31 +0000 (0:00:00.479) 0:00:34.082 ********* 2025-05-14 02:41:37.598417 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-05-14 02:41:37.598470 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-05-14 02:41:37.598482 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-05-14 02:41:37.598494 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-05-14 02:41:37.598504 | orchestrator | 2025-05-14 02:41:37.598567 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-05-14 02:41:37.598579 | orchestrator | skipping: no hosts matched 2025-05-14 02:41:37.598591 | orchestrator | 2025-05-14 02:41:37.598602 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:41:37.598631 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:41:37.598646 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:41:37.598660 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:41:37.598671 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:41:37.598682 | orchestrator | 2025-05-14 02:41:37.598693 | orchestrator | 2025-05-14 02:41:37.598704 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:41:37.598715 | orchestrator | Wednesday 14 May 2025 02:40:32 +0000 (0:00:00.531) 0:00:34.614 ********* 2025-05-14 02:41:37.598726 | orchestrator | 
=============================================================================== 2025-05-14 02:41:37.598737 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 29.04s 2025-05-14 02:41:37.598748 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.76s 2025-05-14 02:41:37.598759 | orchestrator | Ensure the destination directory exists --------------------------------- 0.65s 2025-05-14 02:41:37.598770 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-05-14 02:41:37.598781 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s 2025-05-14 02:41:37.598792 | orchestrator | 2025-05-14 02:41:37.598803 | orchestrator | 2025-05-14 02:41:37.598814 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-14 02:41:37.598824 | orchestrator | 2025-05-14 02:41:37.598835 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-14 02:41:37.598846 | orchestrator | Wednesday 14 May 2025 02:40:01 +0000 (0:00:00.129) 0:00:00.129 ********* 2025-05-14 02:41:37.598857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-14 02:41:37.598868 | orchestrator | 2025-05-14 02:41:37.598879 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-14 02:41:37.598890 | orchestrator | Wednesday 14 May 2025 02:40:01 +0000 (0:00:00.163) 0:00:00.293 ********* 2025-05-14 02:41:37.598901 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-14 02:41:37.598912 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-14 02:41:37.598934 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-14 02:41:37.598945 | orchestrator | 2025-05-14 02:41:37.598956 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-14 02:41:37.598967 | orchestrator | Wednesday 14 May 2025 02:40:02 +0000 (0:00:01.031) 0:00:01.325 ********* 2025-05-14 02:41:37.598979 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-14 02:41:37.598990 | orchestrator | 2025-05-14 02:41:37.599001 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-14 02:41:37.599012 | orchestrator | Wednesday 14 May 2025 02:40:03 +0000 (0:00:00.959) 0:00:02.284 ********* 2025-05-14 02:41:37.599036 | orchestrator | changed: [testbed-manager] 2025-05-14 02:41:37.599048 | orchestrator | 2025-05-14 02:41:37.599059 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-14 02:41:37.599070 | orchestrator | Wednesday 14 May 2025 02:40:04 +0000 (0:00:00.791) 0:00:03.076 ********* 2025-05-14 02:41:37.599081 | orchestrator | changed: [testbed-manager] 2025-05-14 02:41:37.599092 | orchestrator | 2025-05-14 02:41:37.599103 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-14 02:41:37.599114 | orchestrator | Wednesday 14 May 2025 02:40:05 +0000 (0:00:00.966) 0:00:04.042 ********* 2025-05-14 02:41:37.599125 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
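
The retry above comes from the role bringing up cephclient as a small docker-compose project under /opt/cephclient and polling until the container is ready; the later "Ensure that all containers are up" and "Wait for an healthy service" handlers repeat the same wait. A minimal sketch of that wait pattern, assuming a compose project under /opt/cephclient and a container named cephclient (the container name and retry counts are assumptions; only the /opt/cephclient layout and the retry behaviour are taken from the log above):

- name: Bring up the cephclient service and wait for its healthcheck (sketch)
  hosts: testbed-manager
  gather_facts: false
  tasks:
    - name: Start the docker-compose project rendered to /opt/cephclient
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /opt/cephclient
      changed_when: true

    - name: Wait until the cephclient container reports a healthy status
      # failed_when: false keeps intermediate attempts from failing the task;
      # the loop exits once grep finds the healthy status or retries run out.
      ansible.builtin.shell: >
        docker inspect cephclient | grep '"Status": "healthy"'
      register: health
      failed_when: false
      changed_when: false
      retries: 30
      delay: 1
      until: health.rc == 0

The wrapper scripts installed right after this (ceph, ceph-authtool, rados, radosgw-admin, rbd) then expose those commands on testbed-manager, presumably by exec'ing into that container.
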
2025-05-14 02:41:37.599136 | orchestrator | ok: [testbed-manager] 2025-05-14 02:41:37.599147 | orchestrator | 2025-05-14 02:41:37.599158 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-14 02:41:37.599169 | orchestrator | Wednesday 14 May 2025 02:40:46 +0000 (0:00:41.123) 0:00:45.166 ********* 2025-05-14 02:41:37.599179 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-14 02:41:37.599228 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-14 02:41:37.599241 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-14 02:41:37.599252 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-14 02:41:37.599263 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-14 02:41:37.599274 | orchestrator | 2025-05-14 02:41:37.599285 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-14 02:41:37.599296 | orchestrator | Wednesday 14 May 2025 02:40:50 +0000 (0:00:04.151) 0:00:49.318 ********* 2025-05-14 02:41:37.599306 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-14 02:41:37.599317 | orchestrator | 2025-05-14 02:41:37.599328 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-14 02:41:37.599339 | orchestrator | Wednesday 14 May 2025 02:40:50 +0000 (0:00:00.447) 0:00:49.766 ********* 2025-05-14 02:41:37.599349 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:41:37.599360 | orchestrator | 2025-05-14 02:41:37.599371 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-14 02:41:37.599381 | orchestrator | Wednesday 14 May 2025 02:40:50 +0000 (0:00:00.133) 0:00:49.899 ********* 2025-05-14 02:41:37.599392 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:41:37.599402 | orchestrator | 2025-05-14 02:41:37.599413 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-14 02:41:37.599431 | orchestrator | Wednesday 14 May 2025 02:40:51 +0000 (0:00:00.292) 0:00:50.192 ********* 2025-05-14 02:41:37.599441 | orchestrator | changed: [testbed-manager] 2025-05-14 02:41:37.599452 | orchestrator | 2025-05-14 02:41:37.599463 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-14 02:41:37.599473 | orchestrator | Wednesday 14 May 2025 02:40:52 +0000 (0:00:01.386) 0:00:51.578 ********* 2025-05-14 02:41:37.599484 | orchestrator | changed: [testbed-manager] 2025-05-14 02:41:37.599494 | orchestrator | 2025-05-14 02:41:37.599505 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-14 02:41:37.599538 | orchestrator | Wednesday 14 May 2025 02:40:53 +0000 (0:00:01.126) 0:00:52.705 ********* 2025-05-14 02:41:37.599557 | orchestrator | changed: [testbed-manager] 2025-05-14 02:41:37.599568 | orchestrator | 2025-05-14 02:41:37.599579 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-14 02:41:37.599591 | orchestrator | Wednesday 14 May 2025 02:40:54 +0000 (0:00:00.571) 0:00:53.276 ********* 2025-05-14 02:41:37.599602 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-14 02:41:37.599613 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-14 02:41:37.599624 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-14 02:41:37.599635 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-05-14 02:41:37.599645 | orchestrator | 2025-05-14 02:41:37.599657 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:41:37.599668 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:41:37.599679 | orchestrator | 2025-05-14 02:41:37.599690 | orchestrator | Wednesday 14 May 2025 02:40:55 +0000 (0:00:01.475) 0:00:54.752 ********* 2025-05-14 02:41:37.599701 | orchestrator | =============================================================================== 2025-05-14 02:41:37.599712 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.12s 2025-05-14 02:41:37.599723 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.15s 2025-05-14 02:41:37.599734 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s 2025-05-14 02:41:37.599745 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.39s 2025-05-14 02:41:37.599755 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.13s 2025-05-14 02:41:37.599766 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.03s 2025-05-14 02:41:37.599777 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s 2025-05-14 02:41:37.599788 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 0.96s 2025-05-14 02:41:37.599798 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.79s 2025-05-14 02:41:37.599810 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s 2025-05-14 02:41:37.599821 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2025-05-14 02:41:37.599831 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-05-14 02:41:37.599842 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.16s 2025-05-14 02:41:37.599853 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-05-14 02:41:37.599864 | orchestrator | 2025-05-14 02:41:37.599883 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task bf540252-aec0-46c7-bb8a-fd3d5daa61e3 is in state SUCCESS 2025-05-14 02:41:37.600045 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:37.600063 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:37.600075 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:37.600087 | orchestrator | 2025-05-14 02:41:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:40.633760 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:40.634548 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:40.635402 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:40.636834 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task 
a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:40.637865 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:40.637919 | orchestrator | 2025-05-14 02:41:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:43.666760 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:43.667927 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:43.668669 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:43.669143 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:43.670703 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:43.671205 | orchestrator | 2025-05-14 02:41:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:46.710812 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:46.711299 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:46.712019 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:46.712878 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:46.713799 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:46.713831 | orchestrator | 2025-05-14 02:41:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:49.753028 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:49.753207 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:49.753836 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:49.754553 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:49.755215 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:49.755251 | orchestrator | 2025-05-14 02:41:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:52.788977 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:52.790572 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:52.790608 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:52.790621 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:52.790877 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:52.790902 | orchestrator | 2025-05-14 02:41:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:55.823065 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task 
fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:55.824703 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:55.826893 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:55.829297 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:55.830367 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:55.830390 | orchestrator | 2025-05-14 02:41:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:58.858738 | orchestrator | 2025-05-14 02:41:58 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:41:58.861398 | orchestrator | 2025-05-14 02:41:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:41:58.861433 | orchestrator | 2025-05-14 02:41:58 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:41:58.861446 | orchestrator | 2025-05-14 02:41:58 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:41:58.861458 | orchestrator | 2025-05-14 02:41:58 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:41:58.861470 | orchestrator | 2025-05-14 02:41:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:01.899007 | orchestrator | 2025-05-14 02:42:01 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state STARTED 2025-05-14 02:42:01.899082 | orchestrator | 2025-05-14 02:42:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:01.899146 | orchestrator | 2025-05-14 02:42:01 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:42:01.899355 | orchestrator | 2025-05-14 02:42:01 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:01.900158 | orchestrator | 2025-05-14 02:42:01 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:01.900253 | orchestrator | 2025-05-14 02:42:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:04.954166 | orchestrator | 2025-05-14 02:42:04 | INFO  | Task fb16c7ae-b1e4-499e-b707-07e5522b0a7c is in state SUCCESS 2025-05-14 02:42:04.954300 | orchestrator | 2025-05-14 02:42:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:04.955885 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:42:04.955943 | orchestrator | 2025-05-14 02:42:04.955956 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-14 02:42:04.955968 | orchestrator | 2025-05-14 02:42:04.955980 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-14 02:42:04.955991 | orchestrator | Wednesday 14 May 2025 02:40:59 +0000 (0:00:00.430) 0:00:00.430 ********* 2025-05-14 02:42:04.956003 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:04.956015 | orchestrator | 2025-05-14 02:42:04.956026 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-14 02:42:04.956037 | orchestrator | Wednesday 14 May 2025 02:41:01 +0000 (0:00:02.139) 0:00:02.569 ********* 2025-05-14 02:42:04.956048 | orchestrator | changed: 
[testbed-manager] 2025-05-14 02:42:04.956059 | orchestrator | 2025-05-14 02:42:04.956070 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-14 02:42:04.956081 | orchestrator | Wednesday 14 May 2025 02:41:02 +0000 (0:00:01.033) 0:00:03.602 ********* 2025-05-14 02:42:04.956092 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:04.956103 | orchestrator | 2025-05-14 02:42:04.956114 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-14 02:42:04.956125 | orchestrator | Wednesday 14 May 2025 02:41:03 +0000 (0:00:01.077) 0:00:04.679 ********* 2025-05-14 02:42:04.956136 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:04.956173 | orchestrator | 2025-05-14 02:42:04.956185 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-14 02:42:04.956196 | orchestrator | Wednesday 14 May 2025 02:41:04 +0000 (0:00:01.092) 0:00:05.772 ********* 2025-05-14 02:42:04.956207 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:04.956217 | orchestrator | 2025-05-14 02:42:04.956229 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-14 02:42:04.956239 | orchestrator | Wednesday 14 May 2025 02:41:05 +0000 (0:00:01.144) 0:00:06.916 ********* 2025-05-14 02:42:04.956250 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:04.956261 | orchestrator | 2025-05-14 02:42:04.956272 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-14 02:42:04.956283 | orchestrator | Wednesday 14 May 2025 02:41:06 +0000 (0:00:00.998) 0:00:07.915 ********* 2025-05-14 02:42:04.956294 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:04.956304 | orchestrator | 2025-05-14 02:42:04.956315 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-14 02:42:04.956326 | orchestrator | Wednesday 14 May 2025 02:41:08 +0000 (0:00:01.195) 0:00:09.111 ********* 2025-05-14 02:42:04.956337 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:04.956348 | orchestrator | 2025-05-14 02:42:04.956359 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-14 02:42:04.956371 | orchestrator | Wednesday 14 May 2025 02:41:09 +0000 (0:00:01.027) 0:00:10.138 ********* 2025-05-14 02:42:04.956381 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:04.956392 | orchestrator | 2025-05-14 02:42:04.956403 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-14 02:42:04.956414 | orchestrator | Wednesday 14 May 2025 02:41:28 +0000 (0:00:19.432) 0:00:29.570 ********* 2025-05-14 02:42:04.956425 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:42:04.956436 | orchestrator | 2025-05-14 02:42:04.956446 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-14 02:42:04.956457 | orchestrator | 2025-05-14 02:42:04.956468 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-14 02:42:04.956479 | orchestrator | Wednesday 14 May 2025 02:41:29 +0000 (0:00:00.651) 0:00:30.221 ********* 2025-05-14 02:42:04.956490 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:04.956523 | orchestrator | 2025-05-14 02:42:04.956534 | orchestrator | PLAY [Restart ceph manager services] 
******************************************* 2025-05-14 02:42:04.956545 | orchestrator | 2025-05-14 02:42:04.956556 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-14 02:42:04.956567 | orchestrator | Wednesday 14 May 2025 02:41:31 +0000 (0:00:02.039) 0:00:32.261 ********* 2025-05-14 02:42:04.956578 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:42:04.956589 | orchestrator | 2025-05-14 02:42:04.956600 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-14 02:42:04.956611 | orchestrator | 2025-05-14 02:42:04.956622 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-14 02:42:04.956633 | orchestrator | Wednesday 14 May 2025 02:41:33 +0000 (0:00:01.837) 0:00:34.099 ********* 2025-05-14 02:42:04.956644 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:42:04.956655 | orchestrator | 2025-05-14 02:42:04.956665 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:42:04.956678 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:42:04.956690 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:42:04.956716 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:42:04.956728 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:42:04.956747 | orchestrator | 2025-05-14 02:42:04.956758 | orchestrator | 2025-05-14 02:42:04.956769 | orchestrator | 2025-05-14 02:42:04.956780 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:42:04.956791 | orchestrator | Wednesday 14 May 2025 02:41:34 +0000 (0:00:01.464) 0:00:35.563 ********* 2025-05-14 02:42:04.956801 | orchestrator | =============================================================================== 2025-05-14 02:42:04.956812 | orchestrator | Create admin user ------------------------------------------------------ 19.43s 2025-05-14 02:42:04.956838 | orchestrator | Restart ceph manager service -------------------------------------------- 5.34s 2025-05-14 02:42:04.956849 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.14s 2025-05-14 02:42:04.956860 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.20s 2025-05-14 02:42:04.956871 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.14s 2025-05-14 02:42:04.956883 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.09s 2025-05-14 02:42:04.956894 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.08s 2025-05-14 02:42:04.956905 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.03s 2025-05-14 02:42:04.956916 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.03s 2025-05-14 02:42:04.956927 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.00s 2025-05-14 02:42:04.956938 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.65s 2025-05-14 02:42:04.956949 | orchestrator | 2025-05-14 02:42:04.956960 | 
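
The "Bootstraph ceph dashboard" play above boils down to a handful of Ceph CLI calls: disable the dashboard mgr module, set the mgr/dashboard/* options named in the tasks, re-enable the module, create an admin user from a password written to a temporary file, and finally restart the ceph manager containers on the three nodes. A rough equivalent as a small playbook, assuming the cephclient wrapper from the earlier play provides a working ceph command on testbed-manager (the option names and values are taken from the task names above; the command syntax is standard Ceph CLI rather than copied from the playbook, and the password file path is a placeholder):

- name: Bootstrap the ceph dashboard (sketch)
  hosts: testbed-manager
  gather_facts: false
  tasks:
    - name: Disable the ceph dashboard while reconfiguring it
      ansible.builtin.command: ceph mgr module disable dashboard

    - name: Apply the mgr/dashboard settings from the play above
      ansible.builtin.command: "ceph config set mgr {{ item.option }} {{ item.value }}"
      loop:
        - { option: mgr/dashboard/ssl, value: "false" }
        - { option: mgr/dashboard/server_port, value: "7000" }
        - { option: mgr/dashboard/server_addr, value: 0.0.0.0 }
        - { option: mgr/dashboard/standby_behaviour, value: error }
        - { option: mgr/dashboard/standby_error_status_code, value: "404" }

    - name: Enable the ceph dashboard again
      ansible.builtin.command: ceph mgr module enable dashboard

    - name: Create the admin user from a temporary password file
      # /tmp/ceph_dashboard_password is a placeholder; the play writes the real
      # password to a temporary file and removes it afterwards.
      ansible.builtin.command: >
        ceph dashboard ac-user-create admin
        -i /tmp/ceph_dashboard_password administrator

Passing the password with -i from a file, rather than inline on the command line, keeps it out of the process list, which is why the play stages it in a temporary file and deletes it afterwards.
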
orchestrator | 2025-05-14 02:42:04.956971 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:42:04.956982 | orchestrator | 2025-05-14 02:42:04.956993 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:42:04.957004 | orchestrator | Wednesday 14 May 2025 02:40:36 +0000 (0:00:00.607) 0:00:00.607 ********* 2025-05-14 02:42:04.957015 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:42:04.957027 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:42:04.957038 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:42:04.957049 | orchestrator | 2025-05-14 02:42:04.957060 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:42:04.957071 | orchestrator | Wednesday 14 May 2025 02:40:36 +0000 (0:00:00.778) 0:00:01.386 ********* 2025-05-14 02:42:04.957082 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-14 02:42:04.957094 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-14 02:42:04.957104 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-14 02:42:04.957115 | orchestrator | 2025-05-14 02:42:04.957126 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-14 02:42:04.957137 | orchestrator | 2025-05-14 02:42:04.957148 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-14 02:42:04.957159 | orchestrator | Wednesday 14 May 2025 02:40:37 +0000 (0:00:00.721) 0:00:02.107 ********* 2025-05-14 02:42:04.957170 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:42:04.957183 | orchestrator | 2025-05-14 02:42:04.957193 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-14 02:42:04.957204 | orchestrator | Wednesday 14 May 2025 02:40:38 +0000 (0:00:00.679) 0:00:02.787 ********* 2025-05-14 02:42:04.957215 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-14 02:42:04.957226 | orchestrator | 2025-05-14 02:42:04.957237 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-14 02:42:04.957248 | orchestrator | Wednesday 14 May 2025 02:40:41 +0000 (0:00:03.317) 0:00:06.105 ********* 2025-05-14 02:42:04.957259 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-14 02:42:04.957278 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-14 02:42:04.957289 | orchestrator | 2025-05-14 02:42:04.957300 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-14 02:42:04.957311 | orchestrator | Wednesday 14 May 2025 02:40:48 +0000 (0:00:06.811) 0:00:12.916 ********* 2025-05-14 02:42:04.957322 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:42:04.957333 | orchestrator | 2025-05-14 02:42:04.957345 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-14 02:42:04.957356 | orchestrator | Wednesday 14 May 2025 02:40:52 +0000 (0:00:04.391) 0:00:17.308 ********* 2025-05-14 02:42:04.957367 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:42:04.957378 | orchestrator 
| changed: [testbed-node-0] => (item=placement -> service) 2025-05-14 02:42:04.957389 | orchestrator | 2025-05-14 02:42:04.957400 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-14 02:42:04.957410 | orchestrator | Wednesday 14 May 2025 02:40:57 +0000 (0:00:04.605) 0:00:21.913 ********* 2025-05-14 02:42:04.957421 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:42:04.957432 | orchestrator | 2025-05-14 02:42:04.957443 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-14 02:42:04.957453 | orchestrator | Wednesday 14 May 2025 02:41:00 +0000 (0:00:03.529) 0:00:25.443 ********* 2025-05-14 02:42:04.957464 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-14 02:42:04.957475 | orchestrator | 2025-05-14 02:42:04.957486 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-14 02:42:04.957513 | orchestrator | Wednesday 14 May 2025 02:41:05 +0000 (0:00:05.029) 0:00:30.473 ********* 2025-05-14 02:42:04.957530 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:04.957541 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:04.957552 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:04.957563 | orchestrator | 2025-05-14 02:42:04.957574 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-14 02:42:04.957585 | orchestrator | Wednesday 14 May 2025 02:41:06 +0000 (0:00:00.751) 0:00:31.224 ********* 2025-05-14 02:42:04.957609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.957625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.957645 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.957657 | orchestrator | 2025-05-14 02:42:04.957668 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-14 02:42:04.957679 | orchestrator | Wednesday 14 May 2025 02:41:08 +0000 (0:00:01.491) 0:00:32.716 ********* 2025-05-14 02:42:04.957690 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:04.957701 | orchestrator | 2025-05-14 02:42:04.957712 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-14 02:42:04.957723 | orchestrator | Wednesday 14 May 2025 02:41:08 +0000 (0:00:00.370) 0:00:33.086 ********* 2025-05-14 02:42:04.957733 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:04.957744 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:04.957755 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:04.957766 | orchestrator | 2025-05-14 02:42:04.957777 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-14 02:42:04.957788 | orchestrator | Wednesday 14 May 2025 02:41:09 +0000 (0:00:00.930) 0:00:34.016 ********* 2025-05-14 02:42:04.957799 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:42:04.957810 | orchestrator | 2025-05-14 02:42:04.957820 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-14 02:42:04.957831 | orchestrator | Wednesday 14 May 2025 02:41:11 +0000 (0:00:02.027) 0:00:36.044 ********* 2025-05-14 02:42:04.957855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.957868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.957889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.957900 | orchestrator | 2025-05-14 02:42:04.957911 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-14 02:42:04.957922 | orchestrator | Wednesday 14 May 2025 02:41:13 +0000 (0:00:02.157) 0:00:38.201 ********* 2025-05-14 02:42:04.957934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:42:04.957945 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:04.957962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:42:04.957979 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:04.957991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:42:04.958009 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:04.958068 | orchestrator | 2025-05-14 02:42:04.958080 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-14 02:42:04.958091 | orchestrator | Wednesday 14 May 2025 02:41:14 +0000 (0:00:01.008) 0:00:39.209 ********* 2025-05-14 02:42:04.958102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:42:04.958114 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:04.958125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:42:04.958137 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:04.958154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:42:04.958166 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:04.958177 | orchestrator | 2025-05-14 02:42:04.958195 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-14 02:42:04.958206 | orchestrator | Wednesday 14 May 2025 02:41:16 +0000 (0:00:01.762) 0:00:40.972 ********* 2025-05-14 02:42:04.958218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.958237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.958249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.958261 | orchestrator | 2025-05-14 02:42:04.958272 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-14 02:42:04.958283 | orchestrator | Wednesday 14 May 2025 02:41:18 +0000 (0:00:02.462) 0:00:43.435 ********* 2025-05-14 02:42:04.958299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.958319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.958345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.958357 | orchestrator | 2025-05-14 02:42:04.958368 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-14 02:42:04.958379 | orchestrator | Wednesday 14 May 2025 02:41:22 +0000 (0:00:04.105) 0:00:47.540 ********* 2025-05-14 02:42:04.958391 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-14 02:42:04.958402 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-14 02:42:04.958413 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-14 02:42:04.958424 | orchestrator | 2025-05-14 02:42:04.958435 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-14 02:42:04.958446 | orchestrator | Wednesday 14 May 2025 02:41:26 +0000 (0:00:03.276) 0:00:50.816 ********* 2025-05-14 02:42:04.958457 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:04.958468 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:42:04.958479 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:42:04.958490 | orchestrator | 2025-05-14 02:42:04.958518 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-14 02:42:04.958529 | orchestrator | Wednesday 14 May 2025 02:41:29 +0000 (0:00:03.306) 0:00:54.122 ********* 2025-05-14 02:42:04.958541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:42:04.958553 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:04.958583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:42:04.958603 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:04.958614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:42:04.958626 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:04.958637 | orchestrator | 2025-05-14 02:42:04.958648 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-14 02:42:04.958658 | orchestrator | Wednesday 14 May 2025 02:41:31 +0000 (0:00:01.903) 0:00:56.026 ********* 2025-05-14 02:42:04.958670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.958682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.958710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:04.958723 | orchestrator | 2025-05-14 02:42:04.958734 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-14 02:42:04.958745 | orchestrator | Wednesday 14 May 2025 02:41:33 +0000 (0:00:01.758) 0:00:57.785 ********* 2025-05-14 02:42:04.958755 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:04.958766 | orchestrator | 2025-05-14 02:42:04.958777 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-14 02:42:04.958788 | orchestrator | Wednesday 14 May 2025 02:41:36 +0000 (0:00:02.866) 0:01:00.651 ********* 2025-05-14 02:42:04.958799 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:04.958809 | orchestrator | 2025-05-14 02:42:04.958820 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-14 02:42:04.958831 | orchestrator | Wednesday 14 May 2025 02:41:38 +0000 (0:00:02.599) 0:01:03.250 ********* 2025-05-14 02:42:04.958842 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:04.958852 | orchestrator | 2025-05-14 02:42:04.958863 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-14 02:42:04.958874 | orchestrator | Wednesday 14 May 2025 02:41:51 +0000 (0:00:13.119) 0:01:16.370 ********* 2025-05-14 02:42:04.958885 | orchestrator | 2025-05-14 02:42:04.958895 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-14 02:42:04.958906 | orchestrator | Wednesday 14 May 2025 02:41:51 +0000 (0:00:00.111) 0:01:16.482 ********* 2025-05-14 02:42:04.958917 | orchestrator | 2025-05-14 02:42:04.958928 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-14 02:42:04.958938 | orchestrator | Wednesday 14 May 2025 02:41:52 +0000 (0:00:00.319) 0:01:16.802 ********* 2025-05-14 02:42:04.958949 | orchestrator | 2025-05-14 02:42:04.958960 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-14 02:42:04.958970 | orchestrator | Wednesday 14 May 2025 02:41:52 +0000 (0:00:00.097) 0:01:16.900 ********* 2025-05-14 02:42:04.958981 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:04.958992 | orchestrator | changed: 
[testbed-node-1] 2025-05-14 02:42:04.959003 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:42:04.959013 | orchestrator | 2025-05-14 02:42:04.959024 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:42:04.959035 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:42:04.959046 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:42:04.959057 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:42:04.959068 | orchestrator | 2025-05-14 02:42:04.959079 | orchestrator | 2025-05-14 02:42:04.959090 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:42:04.959100 | orchestrator | Wednesday 14 May 2025 02:42:02 +0000 (0:00:10.246) 0:01:27.146 ********* 2025-05-14 02:42:04.959118 | orchestrator | =============================================================================== 2025-05-14 02:42:04.959129 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.12s 2025-05-14 02:42:04.959139 | orchestrator | placement : Restart placement-api container ---------------------------- 10.25s 2025-05-14 02:42:04.959150 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.81s 2025-05-14 02:42:04.959161 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 5.03s 2025-05-14 02:42:04.959171 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.61s 2025-05-14 02:42:04.959182 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.39s 2025-05-14 02:42:04.959193 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.11s 2025-05-14 02:42:04.959204 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.53s 2025-05-14 02:42:04.959214 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.32s 2025-05-14 02:42:04.959225 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 3.31s 2025-05-14 02:42:04.959236 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 3.28s 2025-05-14 02:42:04.959246 | orchestrator | placement : Creating placement databases -------------------------------- 2.87s 2025-05-14 02:42:04.959257 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.60s 2025-05-14 02:42:04.959267 | orchestrator | placement : Copying over config.json files for services ----------------- 2.46s 2025-05-14 02:42:04.959278 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.16s 2025-05-14 02:42:04.959294 | orchestrator | placement : include_tasks ----------------------------------------------- 2.03s 2025-05-14 02:42:04.959304 | orchestrator | placement : Copying over existing policy file --------------------------- 1.90s 2025-05-14 02:42:04.959315 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.76s 2025-05-14 02:42:04.959326 | orchestrator | placement : Check placement containers ---------------------------------- 1.76s 2025-05-14 02:42:04.959336 | orchestrator | placement : Ensuring config directories 
exist --------------------------- 1.49s 2025-05-14 02:42:04.959347 | orchestrator | 2025-05-14 02:42:04 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:42:04.959363 | orchestrator | 2025-05-14 02:42:04 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:04.959375 | orchestrator | 2025-05-14 02:42:04 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:04.959386 | orchestrator | 2025-05-14 02:42:04 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:04.959556 | orchestrator | 2025-05-14 02:42:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:07.979899 | orchestrator | 2025-05-14 02:42:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:07.981979 | orchestrator | 2025-05-14 02:42:07 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:42:07.982564 | orchestrator | 2025-05-14 02:42:07 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:07.983771 | orchestrator | 2025-05-14 02:42:07 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:07.984270 | orchestrator | 2025-05-14 02:42:07 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:07.984439 | orchestrator | 2025-05-14 02:42:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:11.018572 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:11.018748 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:42:11.019384 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:11.020167 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:11.020728 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:11.020763 | orchestrator | 2025-05-14 02:42:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:14.061633 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:14.062315 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:42:14.062793 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:14.063386 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:14.064713 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:14.064738 | orchestrator | 2025-05-14 02:42:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:17.111780 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:17.111925 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:42:17.112355 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:17.112930 | orchestrator | 2025-05-14 02:42:17 | INFO  | 
Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:17.113532 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:17.113556 | orchestrator | 2025-05-14 02:42:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:20.149192 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:20.151876 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state STARTED 2025-05-14 02:42:20.155845 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:20.157807 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:20.160461 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:20.160575 | orchestrator | 2025-05-14 02:42:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:23.188973 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:23.189099 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task af120697-6f66-4c9b-ab04-3a1ba9b1f0e7 is in state SUCCESS 2025-05-14 02:42:23.190706 | orchestrator | 2025-05-14 02:42:23.190777 | orchestrator | 2025-05-14 02:42:23.190788 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:42:23.190797 | orchestrator | 2025-05-14 02:42:23.190804 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:42:23.190813 | orchestrator | Wednesday 14 May 2025 02:39:57 +0000 (0:00:00.372) 0:00:00.372 ********* 2025-05-14 02:42:23.190820 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:42:23.190829 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:42:23.190868 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:42:23.190875 | orchestrator | 2025-05-14 02:42:23.190883 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:42:23.190890 | orchestrator | Wednesday 14 May 2025 02:39:57 +0000 (0:00:00.568) 0:00:00.940 ********* 2025-05-14 02:42:23.190896 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-14 02:42:23.190904 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-14 02:42:23.190911 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-14 02:42:23.190917 | orchestrator | 2025-05-14 02:42:23.190924 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-14 02:42:23.190930 | orchestrator | 2025-05-14 02:42:23.190937 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-14 02:42:23.190943 | orchestrator | Wednesday 14 May 2025 02:39:58 +0000 (0:00:00.472) 0:00:01.413 ********* 2025-05-14 02:42:23.190951 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:42:23.190960 | orchestrator | 2025-05-14 02:42:23.190967 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-14 02:42:23.190974 | orchestrator | Wednesday 14 May 2025 02:39:59 +0000 (0:00:00.930) 0:00:02.343 ********* 2025-05-14 
02:42:23.190982 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-14 02:42:23.190989 | orchestrator | 2025-05-14 02:42:23.190996 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-14 02:42:23.191003 | orchestrator | Wednesday 14 May 2025 02:40:02 +0000 (0:00:03.602) 0:00:05.946 ********* 2025-05-14 02:42:23.191010 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-14 02:42:23.191018 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-14 02:42:23.191025 | orchestrator | 2025-05-14 02:42:23.191031 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-14 02:42:23.191038 | orchestrator | Wednesday 14 May 2025 02:40:09 +0000 (0:00:06.806) 0:00:12.752 ********* 2025-05-14 02:42:23.191045 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating projects (5 retries left). 2025-05-14 02:42:23.191052 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:42:23.191059 | orchestrator | 2025-05-14 02:42:23.191065 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-14 02:42:23.191071 | orchestrator | Wednesday 14 May 2025 02:40:26 +0000 (0:00:16.994) 0:00:29.747 ********* 2025-05-14 02:42:23.191077 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:42:23.191083 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-14 02:42:23.191089 | orchestrator | 2025-05-14 02:42:23.191095 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-14 02:42:23.191101 | orchestrator | Wednesday 14 May 2025 02:40:30 +0000 (0:00:03.921) 0:00:33.668 ********* 2025-05-14 02:42:23.191107 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:42:23.191115 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-14 02:42:23.191121 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-14 02:42:23.191128 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-14 02:42:23.191135 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-14 02:42:23.191141 | orchestrator | 2025-05-14 02:42:23.191148 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-14 02:42:23.191155 | orchestrator | Wednesday 14 May 2025 02:40:47 +0000 (0:00:16.361) 0:00:50.030 ********* 2025-05-14 02:42:23.191161 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-14 02:42:23.191168 | orchestrator | 2025-05-14 02:42:23.191174 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-14 02:42:23.191191 | orchestrator | Wednesday 14 May 2025 02:40:52 +0000 (0:00:05.816) 0:00:55.846 ********* 2025-05-14 02:42:23.191272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.191284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.191291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.191300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191367 | orchestrator | 2025-05-14 02:42:23.191375 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-14 02:42:23.191381 | orchestrator | Wednesday 14 May 2025 02:40:55 +0000 (0:00:03.031) 0:00:58.878 ********* 2025-05-14 02:42:23.191389 | orchestrator | changed: 
[testbed-node-1] => (item=barbican-api/vassals) 2025-05-14 02:42:23.191396 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-14 02:42:23.191404 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-14 02:42:23.191411 | orchestrator | 2025-05-14 02:42:23.191418 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-14 02:42:23.191425 | orchestrator | Wednesday 14 May 2025 02:40:59 +0000 (0:00:03.614) 0:01:02.492 ********* 2025-05-14 02:42:23.191432 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:23.191439 | orchestrator | 2025-05-14 02:42:23.191454 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-14 02:42:23.191460 | orchestrator | Wednesday 14 May 2025 02:40:59 +0000 (0:00:00.463) 0:01:02.960 ********* 2025-05-14 02:42:23.191467 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:23.191473 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:23.191606 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:23.191613 | orchestrator | 2025-05-14 02:42:23.191620 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-14 02:42:23.191627 | orchestrator | Wednesday 14 May 2025 02:41:00 +0000 (0:00:00.759) 0:01:03.719 ********* 2025-05-14 02:42:23.191634 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:42:23.191641 | orchestrator | 2025-05-14 02:42:23.191647 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-14 02:42:23.191655 | orchestrator | Wednesday 14 May 2025 02:41:02 +0000 (0:00:01.321) 0:01:05.041 ********* 2025-05-14 02:42:23.191678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.191689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.191697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.191704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191745 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.191772 | orchestrator | 2025-05-14 02:42:23.191779 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-14 02:42:23.191786 | orchestrator | Wednesday 14 May 2025 02:41:09 +0000 (0:00:07.029) 0:01:12.070 ********* 2025-05-14 02:42:23.191793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:42:23.191804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.191818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.191826 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:23.191833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:42:23.191841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.191855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.191862 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:23.191881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:42:23.191895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.191902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.191909 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:23.191916 | orchestrator | 2025-05-14 02:42:23.191922 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-14 02:42:23.191928 | orchestrator | Wednesday 14 May 2025 02:41:10 +0000 (0:00:01.692) 0:01:13.763 ********* 2025-05-14 02:42:23.191935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:42:23.191949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.191955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.191962 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:23.192064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:42:23.192080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:42:23.192093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192113 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:23.192126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192133 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:23.192139 | orchestrator | 2025-05-14 02:42:23.192153 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-14 02:42:23.192160 | orchestrator | Wednesday 14 May 2025 02:41:12 +0000 (0:00:01.656) 0:01:15.420 
********* 2025-05-14 02:42:23.192167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.192180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.192188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.192197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192251 | orchestrator | 2025-05-14 02:42:23.192258 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-14 02:42:23.192264 | orchestrator | Wednesday 14 May 2025 02:41:16 +0000 (0:00:03.860) 0:01:19.280 ********* 2025-05-14 02:42:23.192271 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:23.192278 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:42:23.192285 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:42:23.192291 | orchestrator | 2025-05-14 02:42:23.192298 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-14 02:42:23.192305 | orchestrator | Wednesday 14 May 2025 02:41:19 +0000 (0:00:02.728) 0:01:22.008 ********* 2025-05-14 02:42:23.192311 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:42:23.192318 | orchestrator | 2025-05-14 02:42:23.192328 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-14 02:42:23.192335 | orchestrator | Wednesday 14 May 2025 02:41:21 +0000 (0:00:02.680) 0:01:24.689 ********* 2025-05-14 02:42:23.192341 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:23.192348 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:23.192355 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:23.192361 | orchestrator | 2025-05-14 02:42:23.192368 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-14 02:42:23.192375 | orchestrator | Wednesday 14 May 2025 02:41:22 +0000 (0:00:01.137) 0:01:25.826 ********* 2025-05-14 02:42:23.192390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.192403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.192411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.192418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192453 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192473 | orchestrator | 2025-05-14 02:42:23.192480 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-14 02:42:23.192486 | orchestrator | Wednesday 14 May 2025 02:41:35 +0000 (0:00:12.792) 0:01:38.619 ********* 2025-05-14 02:42:23.192520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:42:23.192540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192555 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:23.192563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:42:23.192570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192585 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:23.192602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:42:23.192717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:42:23.192735 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:23.192742 | orchestrator | 2025-05-14 02:42:23.192748 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-14 02:42:23.192755 | orchestrator | Wednesday 14 May 2025 02:41:36 +0000 (0:00:00.936) 0:01:39.556 ********* 2025-05-14 02:42:23.192762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.192774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.192798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:42:23.192806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:42:23.192862 | orchestrator | 2025-05-14 02:42:23.192869 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-14 02:42:23.192875 | orchestrator | Wednesday 14 May 2025 02:41:40 +0000 (0:00:04.251) 0:01:43.808 ********* 2025-05-14 02:42:23.192882 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:42:23.192888 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:42:23.192894 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 02:42:23.192900 | orchestrator | 2025-05-14 02:42:23.192906 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-14 02:42:23.192913 | orchestrator | Wednesday 14 May 2025 02:41:41 +0000 (0:00:00.583) 0:01:44.391 ********* 2025-05-14 02:42:23.192919 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:23.192926 | orchestrator | 2025-05-14 02:42:23.192932 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-14 02:42:23.192938 | orchestrator | Wednesday 14 May 2025 02:41:43 +0000 (0:00:02.439) 0:01:46.830 ********* 2025-05-14 02:42:23.192945 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:23.192951 | orchestrator | 2025-05-14 02:42:23.192958 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-14 02:42:23.192965 | orchestrator | Wednesday 14 May 2025 02:41:46 +0000 (0:00:02.233) 0:01:49.064 ********* 2025-05-14 02:42:23.192971 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:23.192978 | orchestrator | 2025-05-14 02:42:23.192985 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-14 02:42:23.192991 | orchestrator | Wednesday 14 May 2025 02:41:57 +0000 (0:00:11.291) 0:02:00.356 ********* 2025-05-14 02:42:23.192998 | orchestrator | 2025-05-14 02:42:23.193004 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-14 02:42:23.193011 | orchestrator | Wednesday 14 May 2025 02:41:57 +0000 (0:00:00.053) 0:02:00.410 ********* 2025-05-14 02:42:23.193017 | orchestrator | 2025-05-14 02:42:23.193024 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-14 02:42:23.193030 | orchestrator | Wednesday 14 May 2025 02:41:57 +0000 (0:00:00.142) 0:02:00.552 ********* 2025-05-14 02:42:23.193037 | orchestrator | 2025-05-14 02:42:23.193043 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-14 02:42:23.193050 | orchestrator | Wednesday 14 May 2025 02:41:57 +0000 (0:00:00.052) 0:02:00.605 ********* 2025-05-14 02:42:23.193057 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:23.193063 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:42:23.193076 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:42:23.193082 | orchestrator | 2025-05-14 02:42:23.193088 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-14 02:42:23.193094 | orchestrator | Wednesday 14 May 2025 02:42:04 +0000 (0:00:06.991) 0:02:07.596 ********* 2025-05-14 02:42:23.193100 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:42:23.193107 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:23.193113 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:42:23.193119 | orchestrator | 2025-05-14 02:42:23.193125 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-14 02:42:23.193132 | orchestrator | Wednesday 14 May 2025 02:42:15 +0000 (0:00:11.094) 0:02:18.691 ********* 2025-05-14 02:42:23.193139 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:42:23.193145 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:42:23.193151 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:42:23.193158 | orchestrator | 2025-05-14 02:42:23.193164 | orchestrator | PLAY RECAP 
*********************************************************************
2025-05-14 02:42:23.193172 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-14 02:42:23.193181 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-14 02:42:23.193188 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-14 02:42:23.193194 | orchestrator |
2025-05-14 02:42:23.193201 | orchestrator |
2025-05-14 02:42:23.193208 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:42:23.193214 | orchestrator | Wednesday 14 May 2025 02:42:20 +0000 (0:00:05.151) 0:02:23.843 *********
2025-05-14 02:42:23.193226 | orchestrator | ===============================================================================
2025-05-14 02:42:23.193233 | orchestrator | service-ks-register : barbican | Creating projects --------------------- 16.99s
2025-05-14 02:42:23.193240 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.36s
2025-05-14 02:42:23.193246 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.79s
2025-05-14 02:42:23.193253 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.29s
2025-05-14 02:42:23.193260 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.10s
2025-05-14 02:42:23.193267 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 7.03s
2025-05-14 02:42:23.193279 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.99s
2025-05-14 02:42:23.193286 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.81s
2025-05-14 02:42:23.193293 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.82s
2025-05-14 02:42:23.193300 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.15s
2025-05-14 02:42:23.193306 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.25s
2025-05-14 02:42:23.193313 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.92s
2025-05-14 02:42:23.193320 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.86s
2025-05-14 02:42:23.193328 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 3.61s
2025-05-14 02:42:23.193334 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.60s
2025-05-14 02:42:23.193341 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.03s
2025-05-14 02:42:23.193349 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.73s
2025-05-14 02:42:23.193358 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.68s
2025-05-14 02:42:23.193366 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.44s
2025-05-14 02:42:23.193381 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.23s
2025-05-14 02:42:23.193389 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED
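Note: the barbican item dictionaries echoed throughout the play above all follow the same kolla-ansible service shape: a container name, a release image from registry.osism.tech, bind-mounted config volumes, and a healthcheck that either curls the local API endpoint (barbican-api on port 9311) or checks the service's RabbitMQ connection with healthcheck_port on 5672. As a rough, non-authoritative sketch, the Python below restates the barbican-api entry for testbed-node-0 from the log and shows one plausible way such a block could be mapped onto the nanosecond-based healthcheck fields of the Docker Engine API; docker_healthcheck() and its field mapping are illustrative assumptions, not kolla-ansible or OSISM code.

```python
# Rough sketch only: the dict mirrors the 'barbican-api' item printed above;
# docker_healthcheck() and its mapping are illustrative assumptions.
barbican_api = {
    "container_name": "barbican_api",
    "image": "registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206",
    "volumes": [
        "/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "barbican:/var/lib/barbican/",
        "kolla_logs:/var/log/kolla/",
    ],
    # Seconds as strings, exactly as they appear in the play output.
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "timeout": "30",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    },
}

def docker_healthcheck(hc):
    """Translate the string-based block into nanosecond-based fields of the
    Docker Engine API HealthConfig object (assumed mapping)."""
    to_ns = lambda seconds: int(seconds) * 1_000_000_000
    return {
        "Test": hc["test"],
        "Interval": to_ns(hc["interval"]),
        "Timeout": to_ns(hc["timeout"]),
        "StartPeriod": to_ns(hc["start_period"]),
        "Retries": int(hc["retries"]),
    }

print(docker_healthcheck(barbican_api["healthcheck"]))
```

The barbican-keystone-listener and barbican-worker entries differ only in their test command (healthcheck_port <service> 5672), i.e. they verify the AMQP connection instead of the HTTP API.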
2025-05-14 02:42:23.193397 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED
2025-05-14 02:42:23.193404 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED
2025-05-14 02:42:23.193412 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task 1a758990-fee6-418e-8224-c75f7748c2e4 is in state STARTED
2025-05-14 02:42:23.193419 | orchestrator | 2025-05-14 02:42:23 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:42:26.214870 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:42:26.214991 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED
2025-05-14 02:42:26.215912 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED
2025-05-14 02:42:26.216677 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED
2025-05-14 02:42:26.219073 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task 1a758990-fee6-418e-8224-c75f7748c2e4 is in state STARTED
2025-05-14 02:42:26.219135 | orchestrator | 2025-05-14 02:42:26 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:42:29.248129 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:42:29.248613 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task d3b4e56a-6b97-4917-872a-af8ccd5dd9d9 is in state STARTED
2025-05-14 02:42:29.249734 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED
2025-05-14 02:42:29.250210 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED
2025-05-14 02:42:29.251712 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED
2025-05-14 02:42:29.252433 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task 1a758990-fee6-418e-8224-c75f7748c2e4 is in state SUCCESS
2025-05-14 02:42:29.252469 | orchestrator | 2025-05-14 02:42:29 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:42:32.288384 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED
2025-05-14 02:42:32.288728 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:42:32.289237 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task d3b4e56a-6b97-4917-872a-af8ccd5dd9d9 is in state STARTED
2025-05-14 02:42:32.290128 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED
2025-05-14 02:42:32.290633 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED
2025-05-14 02:42:32.291527 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED
2025-05-14 02:42:32.291552 | orchestrator | 2025-05-14 02:42:32 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:42:35.333537 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED
2025-05-14 02:42:35.334428 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 02:42:35.335621 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task d3b4e56a-6b97-4917-872a-af8ccd5dd9d9 is in
state STARTED 2025-05-14 02:42:35.336094 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:35.336668 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:35.337295 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:35.338207 | orchestrator | 2025-05-14 02:42:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:38.366323 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:42:38.366606 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:38.367070 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task d3b4e56a-6b97-4917-872a-af8ccd5dd9d9 is in state SUCCESS 2025-05-14 02:42:38.367652 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:38.368247 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:38.370801 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:38.370851 | orchestrator | 2025-05-14 02:42:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:41.399456 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:42:41.400395 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:41.401394 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:41.403467 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:41.403523 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:41.403530 | orchestrator | 2025-05-14 02:42:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:44.436702 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:42:44.436840 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:44.437390 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:44.438337 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:44.438781 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:44.438821 | orchestrator | 2025-05-14 02:42:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:47.468666 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:42:47.469454 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:47.470101 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:47.471078 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task 
3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:47.472520 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:47.472603 | orchestrator | 2025-05-14 02:42:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:50.508174 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:42:50.508286 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:50.508299 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:50.508309 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:50.508317 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:50.508325 | orchestrator | 2025-05-14 02:42:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:53.550933 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:42:53.552641 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:53.553086 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:53.553655 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:53.554272 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:53.554310 | orchestrator | 2025-05-14 02:42:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:56.586867 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:42:56.587060 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:56.588073 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:56.588623 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:56.589286 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:56.589311 | orchestrator | 2025-05-14 02:42:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:59.622814 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:42:59.623050 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:42:59.623693 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:42:59.624565 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:42:59.625081 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:42:59.625168 | orchestrator | 2025-05-14 02:42:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:02.667015 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task 
da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:02.667374 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:02.668864 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:43:02.669977 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:02.671280 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:02.671374 | orchestrator | 2025-05-14 02:43:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:05.709977 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:05.710185 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:05.710206 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:43:05.710380 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:05.711031 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:05.711067 | orchestrator | 2025-05-14 02:43:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:08.742674 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:08.743224 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:08.743727 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:43:08.744958 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:08.745302 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:08.750412 | orchestrator | 2025-05-14 02:43:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:11.774371 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:11.775026 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:11.775642 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:43:11.777199 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:11.777746 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:11.777771 | orchestrator | 2025-05-14 02:43:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:14.804854 | orchestrator | 2025-05-14 02:43:14 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:14.805383 | orchestrator | 2025-05-14 02:43:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:14.805978 | orchestrator | 2025-05-14 02:43:14 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:43:14.806610 | orchestrator | 2025-05-14 
02:43:14 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:14.807186 | orchestrator | 2025-05-14 02:43:14 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:14.807226 | orchestrator | 2025-05-14 02:43:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:17.838351 | orchestrator | 2025-05-14 02:43:17 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:17.838606 | orchestrator | 2025-05-14 02:43:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:17.838712 | orchestrator | 2025-05-14 02:43:17 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state STARTED 2025-05-14 02:43:17.838988 | orchestrator | 2025-05-14 02:43:17 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:17.839567 | orchestrator | 2025-05-14 02:43:17 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:17.839590 | orchestrator | 2025-05-14 02:43:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:20.866382 | orchestrator | 2025-05-14 02:43:20 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:20.866738 | orchestrator | 2025-05-14 02:43:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:20.866764 | orchestrator | 2025-05-14 02:43:20 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:20.867736 | orchestrator | 2025-05-14 02:43:20 | INFO  | Task a11d604e-4f60-466b-8edf-fdcf111ac355 is in state SUCCESS 2025-05-14 02:43:20.869188 | orchestrator | 2025-05-14 02:43:20.869219 | orchestrator | 2025-05-14 02:43:20.869227 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:43:20.869234 | orchestrator | 2025-05-14 02:43:20.869241 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:43:20.869249 | orchestrator | Wednesday 14 May 2025 02:42:26 +0000 (0:00:00.329) 0:00:00.329 ********* 2025-05-14 02:43:20.869256 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:43:20.869295 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:43:20.869302 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:43:20.869309 | orchestrator | 2025-05-14 02:43:20.869316 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:43:20.869323 | orchestrator | Wednesday 14 May 2025 02:42:26 +0000 (0:00:00.421) 0:00:00.751 ********* 2025-05-14 02:43:20.869330 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-14 02:43:20.869353 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-14 02:43:20.869375 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-14 02:43:20.869413 | orchestrator | 2025-05-14 02:43:20.869420 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-14 02:43:20.869426 | orchestrator | 2025-05-14 02:43:20.869526 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-14 02:43:20.869544 | orchestrator | Wednesday 14 May 2025 02:42:27 +0000 (0:00:00.868) 0:00:01.619 ********* 2025-05-14 02:43:20.869551 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:43:20.869558 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:43:20.869564 | 
orchestrator | ok: [testbed-node-2] 2025-05-14 02:43:20.869570 | orchestrator | 2025-05-14 02:43:20.869577 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:43:20.869586 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:43:20.869595 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:43:20.869602 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:43:20.869609 | orchestrator | 2025-05-14 02:43:20.869616 | orchestrator | 2025-05-14 02:43:20.869623 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:43:20.869630 | orchestrator | Wednesday 14 May 2025 02:42:28 +0000 (0:00:00.954) 0:00:02.574 ********* 2025-05-14 02:43:20.869638 | orchestrator | =============================================================================== 2025-05-14 02:43:20.869645 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.95s 2025-05-14 02:43:20.869671 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2025-05-14 02:43:20.869677 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-05-14 02:43:20.869685 | orchestrator | 2025-05-14 02:43:20.869691 | orchestrator | None 2025-05-14 02:43:20.869698 | orchestrator | 2025-05-14 02:43:20.869704 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:43:20.869711 | orchestrator | 2025-05-14 02:43:20.869718 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:43:20.869724 | orchestrator | Wednesday 14 May 2025 02:39:57 +0000 (0:00:00.347) 0:00:00.347 ********* 2025-05-14 02:43:20.869730 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:43:20.869736 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:43:20.869742 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:43:20.869748 | orchestrator | 2025-05-14 02:43:20.869755 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:43:20.869761 | orchestrator | Wednesday 14 May 2025 02:39:57 +0000 (0:00:00.541) 0:00:00.889 ********* 2025-05-14 02:43:20.869768 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-14 02:43:20.869775 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-14 02:43:20.869782 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-14 02:43:20.869789 | orchestrator | 2025-05-14 02:43:20.869797 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-14 02:43:20.869804 | orchestrator | 2025-05-14 02:43:20.869811 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 02:43:20.869819 | orchestrator | Wednesday 14 May 2025 02:39:58 +0000 (0:00:00.376) 0:00:01.266 ********* 2025-05-14 02:43:20.869827 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:43:20.869834 | orchestrator | 2025-05-14 02:43:20.869842 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-14 02:43:20.869850 | orchestrator 
| Wednesday 14 May 2025 02:39:58 +0000 (0:00:00.790) 0:00:02.056 ********* 2025-05-14 02:43:20.869857 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-14 02:43:20.869865 | orchestrator | 2025-05-14 02:43:20.869872 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-14 02:43:20.869880 | orchestrator | Wednesday 14 May 2025 02:40:02 +0000 (0:00:03.761) 0:00:05.818 ********* 2025-05-14 02:43:20.869887 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-14 02:43:20.869895 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-14 02:43:20.869902 | orchestrator | 2025-05-14 02:43:20.869908 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-14 02:43:20.869915 | orchestrator | Wednesday 14 May 2025 02:40:09 +0000 (0:00:07.218) 0:00:13.036 ********* 2025-05-14 02:43:20.869921 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-14 02:43:20.869928 | orchestrator | 2025-05-14 02:43:20.869934 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-14 02:43:20.869941 | orchestrator | Wednesday 14 May 2025 02:40:13 +0000 (0:00:03.610) 0:00:16.646 ********* 2025-05-14 02:43:20.869958 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:43:20.869966 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-14 02:43:20.869972 | orchestrator | 2025-05-14 02:43:20.869979 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-14 02:43:20.869985 | orchestrator | Wednesday 14 May 2025 02:40:17 +0000 (0:00:04.300) 0:00:20.947 ********* 2025-05-14 02:43:20.869992 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:43:20.869999 | orchestrator | 2025-05-14 02:43:20.870006 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-14 02:43:20.870089 | orchestrator | Wednesday 14 May 2025 02:40:21 +0000 (0:00:03.398) 0:00:24.346 ********* 2025-05-14 02:43:20.870100 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-14 02:43:20.870106 | orchestrator | 2025-05-14 02:43:20.870113 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-14 02:43:20.870187 | orchestrator | Wednesday 14 May 2025 02:40:25 +0000 (0:00:04.526) 0:00:28.872 ********* 2025-05-14 02:43:20.870199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2025-05-14 02:43:20.870224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.870231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.870239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
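The changed items above show the per-service map the designate role iterates over: each key (designate-api, designate-backend-bind9, designate-central, designate-mdns, and so on) carries the container name, image, bind mounts, healthcheck command and, for the API service, HAProxy settings. As a rough, simplified sketch of how such a service map can drive the "Ensuring config directories exist" step, assuming an invented, trimmed-down designate_services variable and plain ansible.builtin.file rather than the exact kolla-ansible task:

# Sketch only: the real service definitions live in the kolla-ansible designate
# role; this trimmed map just mirrors the shape of the items logged above.
- hosts: localhost
  gather_facts: false
  vars:
    designate_services:
      designate-api:
        container_name: designate_api
        enabled: true
      designate-sink:
        container_name: designate_sink
        enabled: false
  tasks:
    - name: Ensuring config directories exist (simplified)
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"   # mirrors the /etc/kolla/<service>/ layout seen in the volumes above
        state: directory
        mode: "0770"
      # dict2items turns the service map into key/value pairs like the logged items;
      # entries with enabled: false (designate-sink here) fail the condition and show
      # up as "skipping:" results, matching the pattern later in this task's output.
      loop: "{{ designate_services | dict2items }}"
      when: item.value.enabled | bool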
2025-05-14 02:43:20.870371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870457 | orchestrator | 2025-05-14 02:43:20.870464 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-14 02:43:20.870488 | orchestrator | Wednesday 14 May 2025 02:40:28 +0000 (0:00:03.182) 0:00:32.054 ********* 2025-05-14 02:43:20.870495 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:20.870502 | orchestrator | 2025-05-14 02:43:20.870509 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-14 02:43:20.870515 | orchestrator | Wednesday 14 May 2025 02:40:29 +0000 (0:00:00.117) 0:00:32.172 ********* 2025-05-14 02:43:20.870521 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:20.870527 | 
orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:20.870534 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:20.870541 | orchestrator | 2025-05-14 02:43:20.870548 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 02:43:20.870555 | orchestrator | Wednesday 14 May 2025 02:40:29 +0000 (0:00:00.434) 0:00:32.607 ********* 2025-05-14 02:43:20.870562 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:43:20.870569 | orchestrator | 2025-05-14 02:43:20.870575 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-14 02:43:20.870582 | orchestrator | Wednesday 14 May 2025 02:40:30 +0000 (0:00:00.602) 0:00:33.210 ********* 2025-05-14 02:43:20.870588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.870612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.870621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2025-05-14 02:43:20.870627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.870811 | orchestrator | 2025-05-14 02:43:20.870818 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-14 02:43:20.870829 | orchestrator | Wednesday 14 May 2025 02:40:36 +0000 (0:00:06.545) 0:00:39.755 ********* 2025-05-14 02:43:20.870836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.870886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:43:20.870898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  
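The backend internal TLS certificate task above reports skipping for each designate service item, which is the pattern Ansible produces when a task-level when: condition evaluates to false for every loop item; in this run that presumably means TLS towards the Designate backends is not enabled in the testbed. A minimal sketch of that shape, using invented variable and file names rather than the actual kolla-ansible service-cert-copy role:

# Sketch only: with the flag left false, every loop item is skipped on every
# host, reproducing the per-item "skipping:" lines in the log.
- hosts: localhost
  gather_facts: false
  vars:
    designate_enable_tls_backend: false   # assumed off in this deployment
    designate_services:
      designate-api:
        enabled: true
      designate-worker:
        enabled: true
  tasks:
    - name: Copying over backend internal TLS certificate (simplified)
      ansible.builtin.copy:
        src: "certificates/{{ item.key }}-cert.pem"              # hypothetical source path
        dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
        mode: "0600"
      loop: "{{ designate_services | dict2items }}"
      # Both conditions must hold; with the backend-TLS flag false the task never
      # touches the certificate files and Ansible only records the skips.
      when:
        - designate_enable_tls_backend | bool
        - item.value.enabled | bool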
2025-05-14 02:43:20.870931 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:20.870938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.870951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:43:20.870958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.870992 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:20.870999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.871045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:43:20.871054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871089 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:20.871096 | orchestrator | 2025-05-14 02:43:20.871103 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-14 02:43:20.871110 | orchestrator | Wednesday 14 May 2025 02:40:37 +0000 (0:00:01.273) 0:00:41.029 ********* 2025-05-14 02:43:20.871117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.871129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:43:20.871141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871174 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:20.871180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.871187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:43:20.871210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871246 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:20.871253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.871261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:43:20.871273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871309 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:20.871315 | orchestrator | 2025-05-14 02:43:20.871322 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-14 02:43:20.871329 | orchestrator | Wednesday 14 May 2025 02:40:38 +0000 (0:00:01.115) 0:00:42.144 ********* 2025-05-14 02:43:20.871336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.871349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.871362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.871370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871569 | orchestrator | 2025-05-14 02:43:20.871576 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-14 02:43:20.871583 | orchestrator | Wednesday 14 May 2025 02:40:45 +0000 (0:00:06.240) 0:00:48.385 ********* 2025-05-14 02:43:20.871598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.871610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.871616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.871623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871670 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871785 | orchestrator | 2025-05-14 02:43:20.871791 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-14 02:43:20.871798 | orchestrator | Wednesday 14 May 2025 02:41:15 +0000 (0:00:30.254) 0:01:18.639 ********* 2025-05-14 02:43:20.871805 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-14 02:43:20.871812 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-14 02:43:20.871818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-14 02:43:20.871824 | orchestrator | 2025-05-14 02:43:20.871831 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-14 02:43:20.871838 | orchestrator | Wednesday 14 May 2025 02:41:24 +0000 (0:00:08.678) 0:01:27.317 ********* 2025-05-14 02:43:20.871844 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-14 02:43:20.871851 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-14 02:43:20.871857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-14 02:43:20.871864 | orchestrator | 2025-05-14 02:43:20.871871 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-14 02:43:20.871878 | orchestrator | Wednesday 14 May 2025 02:41:31 +0000 (0:00:07.063) 0:01:34.380 ********* 2025-05-14 02:43:20.871885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.871898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.871915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.871923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.871992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.871999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872080 | orchestrator | 2025-05-14 02:43:20.872087 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-14 02:43:20.872093 | orchestrator | Wednesday 14 May 2025 02:41:35 +0000 (0:00:03.953) 0:01:38.334 ********* 2025-05-14 02:43:20.872107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.872114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.872121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.872128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872194 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872291 | orchestrator | 2025-05-14 
02:43:20.872297 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 02:43:20.872303 | orchestrator | Wednesday 14 May 2025 02:41:38 +0000 (0:00:03.162) 0:01:41.497 ********* 2025-05-14 02:43:20.872309 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:20.872315 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:20.872321 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:20.872327 | orchestrator | 2025-05-14 02:43:20.872332 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-14 02:43:20.872337 | orchestrator | Wednesday 14 May 2025 02:41:38 +0000 (0:00:00.629) 0:01:42.126 ********* 2025-05-14 02:43:20.872343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.872356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:43:20.872362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.872516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:43:20.872538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872556 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:20.872567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872599 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:20.872606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:43:20.872616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:43:20.872626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872665 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:20.872672 | orchestrator | 2025-05-14 02:43:20.872679 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-14 02:43:20.872685 | orchestrator | Wednesday 14 May 2025 02:41:41 +0000 (0:00:02.198) 0:01:44.325 ********* 2025-05-14 02:43:20.872695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.872705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.872712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:43:20.872723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:20.872864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:43:20.872872 | orchestrator | 2025-05-14 02:43:20.872881 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 02:43:20.872895 | orchestrator | Wednesday 14 May 2025 02:41:46 +0000 (0:00:05.302) 0:01:49.628 ********* 2025-05-14 02:43:20.872902 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:20.872924 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:20.872931 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:20.872937 | orchestrator | 2025-05-14 02:43:20.872944 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-14 02:43:20.872950 | orchestrator | Wednesday 14 May 2025 02:41:47 +0000 (0:00:00.931) 0:01:50.561 ********* 2025-05-14 02:43:20.872958 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-14 02:43:20.872964 | orchestrator | 2025-05-14 02:43:20.872970 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-14 02:43:20.872977 | orchestrator | Wednesday 14 May 2025 02:41:49 +0000 (0:00:02.430) 0:01:52.992 ********* 2025-05-14 02:43:20.872984 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:43:20.872991 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-14 02:43:20.872998 | orchestrator | 2025-05-14 02:43:20.873005 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-14 02:43:20.873012 | orchestrator | Wednesday 14 May 2025 02:41:52 +0000 (0:00:02.480) 0:01:55.472 ********* 2025-05-14 02:43:20.873018 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:20.873024 | orchestrator | 2025-05-14 02:43:20.873031 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-14 02:43:20.873036 | orchestrator | Wednesday 14 May 2025 
02:42:07 +0000 (0:00:14.739) 0:02:10.212 ********* 2025-05-14 02:43:20.873043 | orchestrator | 2025-05-14 02:43:20.873049 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-14 02:43:20.873055 | orchestrator | Wednesday 14 May 2025 02:42:07 +0000 (0:00:00.045) 0:02:10.257 ********* 2025-05-14 02:43:20.873062 | orchestrator | 2025-05-14 02:43:20.873069 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-14 02:43:20.873075 | orchestrator | Wednesday 14 May 2025 02:42:07 +0000 (0:00:00.040) 0:02:10.298 ********* 2025-05-14 02:43:20.873082 | orchestrator | 2025-05-14 02:43:20.873089 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-14 02:43:20.873096 | orchestrator | Wednesday 14 May 2025 02:42:07 +0000 (0:00:00.043) 0:02:10.341 ********* 2025-05-14 02:43:20.873102 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:20.873109 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:20.873116 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:43:20.873123 | orchestrator | 2025-05-14 02:43:20.873129 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-14 02:43:20.873136 | orchestrator | Wednesday 14 May 2025 02:42:21 +0000 (0:00:13.841) 0:02:24.182 ********* 2025-05-14 02:43:20.873143 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:20.873149 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:43:20.873156 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:20.873162 | orchestrator | 2025-05-14 02:43:20.873169 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-14 02:43:20.873175 | orchestrator | Wednesday 14 May 2025 02:42:34 +0000 (0:00:12.998) 0:02:37.181 ********* 2025-05-14 02:43:20.873182 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:20.873188 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:20.873195 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:43:20.873201 | orchestrator | 2025-05-14 02:43:20.873208 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-05-14 02:43:20.873214 | orchestrator | Wednesday 14 May 2025 02:42:42 +0000 (0:00:08.239) 0:02:45.420 ********* 2025-05-14 02:43:20.873220 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:20.873227 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:43:20.873233 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:20.873239 | orchestrator | 2025-05-14 02:43:20.873246 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-14 02:43:20.873258 | orchestrator | Wednesday 14 May 2025 02:42:50 +0000 (0:00:08.247) 0:02:53.668 ********* 2025-05-14 02:43:20.873265 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:43:20.873272 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:20.873278 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:20.873284 | orchestrator | 2025-05-14 02:43:20.873291 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-14 02:43:20.873297 | orchestrator | Wednesday 14 May 2025 02:43:04 +0000 (0:00:14.256) 0:03:07.925 ********* 2025-05-14 02:43:20.873303 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:20.873310 | orchestrator | changed: [testbed-node-1] 2025-05-14 
02:43:20.873317 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:20.873324 | orchestrator | 2025-05-14 02:43:20.873331 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-14 02:43:20.873339 | orchestrator | Wednesday 14 May 2025 02:43:12 +0000 (0:00:08.021) 0:03:15.947 ********* 2025-05-14 02:43:20.873346 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:20.873353 | orchestrator | 2025-05-14 02:43:20.873359 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:43:20.873371 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:43:20.873379 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:43:20.873385 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:43:20.873392 | orchestrator | 2025-05-14 02:43:20.873398 | orchestrator | 2025-05-14 02:43:20.873405 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:43:20.873412 | orchestrator | Wednesday 14 May 2025 02:43:18 +0000 (0:00:05.764) 0:03:21.711 ********* 2025-05-14 02:43:20.873419 | orchestrator | =============================================================================== 2025-05-14 02:43:20.873430 | orchestrator | designate : Copying over designate.conf -------------------------------- 30.25s 2025-05-14 02:43:20.873437 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.74s 2025-05-14 02:43:20.873443 | orchestrator | designate : Restart designate-mdns container --------------------------- 14.26s 2025-05-14 02:43:20.873450 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.84s 2025-05-14 02:43:20.873457 | orchestrator | designate : Restart designate-api container ---------------------------- 13.00s 2025-05-14 02:43:20.873464 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.68s 2025-05-14 02:43:20.873485 | orchestrator | designate : Restart designate-producer container ------------------------ 8.25s 2025-05-14 02:43:20.873492 | orchestrator | designate : Restart designate-central container ------------------------- 8.24s 2025-05-14 02:43:20.873499 | orchestrator | designate : Restart designate-worker container -------------------------- 8.02s 2025-05-14 02:43:20.873506 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.22s 2025-05-14 02:43:20.873512 | orchestrator | designate : Copying over named.conf ------------------------------------- 7.06s 2025-05-14 02:43:20.873519 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.55s 2025-05-14 02:43:20.873525 | orchestrator | designate : Copying over config.json files for services ----------------- 6.24s 2025-05-14 02:43:20.873531 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.76s 2025-05-14 02:43:20.873538 | orchestrator | designate : Check designate containers ---------------------------------- 5.30s 2025-05-14 02:43:20.873544 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.53s 2025-05-14 02:43:20.873551 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.30s 
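The designate container definitions echoed above attach kolla healthcheck commands to each service, for example healthcheck_curl http://192.168.16.10:9001 for designate-api and healthcheck_port designate-central 5672 for the RabbitMQ-connected workers. As a rough, hypothetical illustration of what such checks amount to (this is not kolla's actual healthcheck script), the two styles boil down to an HTTP probe and a process-has-a-connection-to-port probe:

    # Rough illustration only; NOT kolla's healthcheck_curl/healthcheck_port scripts.
    # The "curl" style probes the API endpoint; the "port" style checks that the
    # named process holds a connection to the given port (5672 = RabbitMQ here).
    import sys
    import urllib.request

    import psutil  # assumption: psutil is available on the probing side


    def check_http(url, timeout=5.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except OSError:
            return False


    def check_process_port(process_name, port):
        for proc in psutil.process_iter(["cmdline"]):
            cmdline = " ".join(proc.info["cmdline"] or [])
            if process_name not in cmdline:
                continue
            try:
                conns = proc.connections(kind="inet")
            except psutil.Error:
                continue
            if any(c.raddr and c.raddr.port == port for c in conns):
                return True
        return False


    if __name__ == "__main__":
        healthy = check_http("http://192.168.16.10:9001") and \
            check_process_port("designate-central", 5672)
        sys.exit(0 if healthy else 1)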
2025-05-14 02:43:20.873566 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.95s 2025-05-14 02:43:20.873573 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.76s 2025-05-14 02:43:20.873579 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.61s 2025-05-14 02:43:20.873586 | orchestrator | 2025-05-14 02:43:20 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:20.873592 | orchestrator | 2025-05-14 02:43:20 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:20.873599 | orchestrator | 2025-05-14 02:43:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:23.891434 | orchestrator | 2025-05-14 02:43:23 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:23.891606 | orchestrator | 2025-05-14 02:43:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:23.891776 | orchestrator | 2025-05-14 02:43:23 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:23.892193 | orchestrator | 2025-05-14 02:43:23 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:23.892659 | orchestrator | 2025-05-14 02:43:23 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:23.892681 | orchestrator | 2025-05-14 02:43:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:26.918539 | orchestrator | 2025-05-14 02:43:26 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:26.918654 | orchestrator | 2025-05-14 02:43:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:26.920041 | orchestrator | 2025-05-14 02:43:26 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:26.920709 | orchestrator | 2025-05-14 02:43:26 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:26.922228 | orchestrator | 2025-05-14 02:43:26 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:26.922293 | orchestrator | 2025-05-14 02:43:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:29.978787 | orchestrator | 2025-05-14 02:43:29 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:29.981221 | orchestrator | 2025-05-14 02:43:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:29.983037 | orchestrator | 2025-05-14 02:43:29 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:29.984899 | orchestrator | 2025-05-14 02:43:29 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:29.987361 | orchestrator | 2025-05-14 02:43:29 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:29.987416 | orchestrator | 2025-05-14 02:43:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:33.043421 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:33.047411 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:33.048243 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 
02:43:33.049222 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:33.050394 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:33.050507 | orchestrator | 2025-05-14 02:43:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:36.102152 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:36.103139 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:36.105156 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:36.106842 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:36.108313 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:36.108362 | orchestrator | 2025-05-14 02:43:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:39.156068 | orchestrator | 2025-05-14 02:43:39 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:39.156910 | orchestrator | 2025-05-14 02:43:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:39.157507 | orchestrator | 2025-05-14 02:43:39 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:39.158312 | orchestrator | 2025-05-14 02:43:39 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:39.159006 | orchestrator | 2025-05-14 02:43:39 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:39.159075 | orchestrator | 2025-05-14 02:43:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:42.216784 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:42.217518 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:42.219863 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:42.221150 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:42.222803 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:42.222860 | orchestrator | 2025-05-14 02:43:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:45.268562 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:45.269721 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:45.270296 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:45.272087 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:45.272921 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:45.272969 | orchestrator | 2025-05-14 02:43:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 
02:43:48.320941 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:48.321049 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:48.321205 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:48.321543 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:48.322082 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:48.322635 | orchestrator | 2025-05-14 02:43:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:51.353801 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:51.357542 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:51.357880 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:51.358898 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:51.360353 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:51.360394 | orchestrator | 2025-05-14 02:43:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:54.398003 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:54.398192 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:54.399686 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state STARTED 2025-05-14 02:43:54.400202 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:54.400761 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:54.400842 | orchestrator | 2025-05-14 02:43:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:57.426307 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:43:57.427757 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:43:57.430212 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:43:57.432712 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task c6b5a286-642e-47e1-ac45-5dcf4ac1550a is in state SUCCESS 2025-05-14 02:43:57.434984 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:43:57.436917 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:43:57.437145 | orchestrator | 2025-05-14 02:43:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:00.483093 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:00.483210 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in 
state STARTED 2025-05-14 02:44:00.483753 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:00.484691 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:44:00.485626 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:00.485665 | orchestrator | 2025-05-14 02:44:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:03.521609 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:03.523125 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:03.530610 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:03.532102 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:44:03.534283 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:03.534321 | orchestrator | 2025-05-14 02:44:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:06.584391 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:06.586532 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:06.587644 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:06.589908 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state STARTED 2025-05-14 02:44:06.591576 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:06.591622 | orchestrator | 2025-05-14 02:44:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:09.632700 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:09.632905 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:09.633206 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:09.634053 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:09.637273 | orchestrator | 2025-05-14 02:44:09.637326 | orchestrator | 2025-05-14 02:44:09.637336 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:44:09.637346 | orchestrator | 2025-05-14 02:44:09.637353 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:44:09.637360 | orchestrator | Wednesday 14 May 2025 02:43:25 +0000 (0:00:00.552) 0:00:00.552 ********* 2025-05-14 02:44:09.637368 | orchestrator | ok: [testbed-manager] 2025-05-14 02:44:09.637376 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:44:09.637383 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:44:09.637389 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:44:09.637395 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:44:09.637401 | orchestrator | ok: [testbed-node-4] 
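The long runs of "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" lines above are the deploy wrapper polling its background tasks until each one reports SUCCESS. A minimal sketch of that polling pattern, with get_task_state() as a hypothetical stand-in for whatever call the real client makes:

    # Minimal sketch of the polling loop visible in the log; get_task_state() is
    # a hypothetical helper, not the actual client API.
    import time

    PENDING_STATES = {"PENDING", "STARTED"}


    def wait_for_tasks(task_ids, get_task_state, interval=1):
        remaining = set(task_ids)
        while remaining:
            for task_id in sorted(remaining):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in PENDING_STATES:
                    remaining.discard(task_id)
            if remaining:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)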
2025-05-14 02:44:09.637408 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:44:09.637414 | orchestrator | 2025-05-14 02:44:09.637421 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:44:09.637428 | orchestrator | Wednesday 14 May 2025 02:43:26 +0000 (0:00:00.801) 0:00:01.354 ********* 2025-05-14 02:44:09.637436 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-14 02:44:09.637443 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-14 02:44:09.637474 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-14 02:44:09.637481 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-14 02:44:09.637487 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-14 02:44:09.637495 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-05-14 02:44:09.637501 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-05-14 02:44:09.637507 | orchestrator | 2025-05-14 02:44:09.637514 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-14 02:44:09.637543 | orchestrator | 2025-05-14 02:44:09.637550 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-05-14 02:44:09.637557 | orchestrator | Wednesday 14 May 2025 02:43:26 +0000 (0:00:00.698) 0:00:02.052 ********* 2025-05-14 02:44:09.637565 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:44:09.637609 | orchestrator | 2025-05-14 02:44:09.637617 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-05-14 02:44:09.637623 | orchestrator | Wednesday 14 May 2025 02:43:28 +0000 (0:00:01.184) 0:00:03.236 ********* 2025-05-14 02:44:09.637630 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-05-14 02:44:09.637637 | orchestrator | 2025-05-14 02:44:09.637644 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-14 02:44:09.637746 | orchestrator | Wednesday 14 May 2025 02:43:31 +0000 (0:00:03.151) 0:00:06.388 ********* 2025-05-14 02:44:09.637755 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-14 02:44:09.637763 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-14 02:44:09.637771 | orchestrator | 2025-05-14 02:44:09.637777 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-14 02:44:09.637785 | orchestrator | Wednesday 14 May 2025 02:43:37 +0000 (0:00:05.836) 0:00:12.224 ********* 2025-05-14 02:44:09.637793 | orchestrator | ok: [testbed-manager] => (item=service) 2025-05-14 02:44:09.637800 | orchestrator | 2025-05-14 02:44:09.637807 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-05-14 02:44:09.637814 | orchestrator | Wednesday 14 May 2025 02:43:39 +0000 (0:00:02.809) 0:00:15.033 ********* 2025-05-14 02:44:09.637821 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:44:09.637829 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> 
service) 2025-05-14 02:44:09.637836 | orchestrator | 2025-05-14 02:44:09.637843 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-14 02:44:09.637850 | orchestrator | Wednesday 14 May 2025 02:43:43 +0000 (0:00:03.447) 0:00:18.481 ********* 2025-05-14 02:44:09.637857 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-05-14 02:44:09.637881 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-05-14 02:44:09.637888 | orchestrator | 2025-05-14 02:44:09.637894 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-14 02:44:09.637900 | orchestrator | Wednesday 14 May 2025 02:43:49 +0000 (0:00:05.991) 0:00:24.473 ********* 2025-05-14 02:44:09.637906 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-05-14 02:44:09.637912 | orchestrator | 2025-05-14 02:44:09.637918 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:44:09.637925 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:44:09.637945 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:44:09.637953 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:44:09.637960 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:44:09.637966 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:44:09.637987 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:44:09.638002 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:44:09.638009 | orchestrator | 2025-05-14 02:44:09.638056 | orchestrator | 2025-05-14 02:44:09.638064 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:44:09.638071 | orchestrator | Wednesday 14 May 2025 02:43:54 +0000 (0:00:05.593) 0:00:30.067 ********* 2025-05-14 02:44:09.638077 | orchestrator | =============================================================================== 2025-05-14 02:44:09.638084 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.99s 2025-05-14 02:44:09.638091 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.84s 2025-05-14 02:44:09.638098 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.59s 2025-05-14 02:44:09.638105 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.45s 2025-05-14 02:44:09.638112 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.15s 2025-05-14 02:44:09.638119 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.81s 2025-05-14 02:44:09.638126 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.18s 2025-05-14 02:44:09.638133 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s 2025-05-14 02:44:09.638139 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-05-14 02:44:09.638146 | 
orchestrator | 2025-05-14 02:44:09.638153 | orchestrator | 2025-05-14 02:44:09.638160 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:44:09.638167 | orchestrator | 2025-05-14 02:44:09.638174 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:44:09.638181 | orchestrator | Wednesday 14 May 2025 02:42:07 +0000 (0:00:00.700) 0:00:00.700 ********* 2025-05-14 02:44:09.638187 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:44:09.638195 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:44:09.638201 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:44:09.638208 | orchestrator | 2025-05-14 02:44:09.638215 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:44:09.638222 | orchestrator | Wednesday 14 May 2025 02:42:08 +0000 (0:00:00.812) 0:00:01.512 ********* 2025-05-14 02:44:09.638229 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-14 02:44:09.638236 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-14 02:44:09.638243 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-14 02:44:09.638250 | orchestrator | 2025-05-14 02:44:09.638257 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-14 02:44:09.638264 | orchestrator | 2025-05-14 02:44:09.638271 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-14 02:44:09.638278 | orchestrator | Wednesday 14 May 2025 02:42:08 +0000 (0:00:00.493) 0:00:02.006 ********* 2025-05-14 02:44:09.638284 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:44:09.638292 | orchestrator | 2025-05-14 02:44:09.638299 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-14 02:44:09.638306 | orchestrator | Wednesday 14 May 2025 02:42:09 +0000 (0:00:01.073) 0:00:03.079 ********* 2025-05-14 02:44:09.638313 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-14 02:44:09.638319 | orchestrator | 2025-05-14 02:44:09.638325 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-14 02:44:09.638331 | orchestrator | Wednesday 14 May 2025 02:42:13 +0000 (0:00:03.935) 0:00:07.014 ********* 2025-05-14 02:44:09.638337 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-14 02:44:09.638343 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-14 02:44:09.638355 | orchestrator | 2025-05-14 02:44:09.638362 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-05-14 02:44:09.638369 | orchestrator | Wednesday 14 May 2025 02:42:20 +0000 (0:00:06.625) 0:00:13.640 ********* 2025-05-14 02:44:09.638375 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:44:09.638382 | orchestrator | 2025-05-14 02:44:09.638389 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-14 02:44:09.638396 | orchestrator | Wednesday 14 May 2025 02:42:24 +0000 (0:00:03.655) 0:00:17.296 ********* 2025-05-14 02:44:09.638403 | orchestrator | [WARNING]: Module did not set no_log for update_password 
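Both plays begin by narrowing the inventory with a "Group hosts based on enabled services" task: hosts are placed into dynamic groups such as enable_ceph_rgw_True or enable_magnum_True, and the service play then targets only that group. A minimal sketch of that pattern follows, with a hard-coded enable_ceph_rgw variable standing in for the value that would normally come from the Kolla/OSISM configuration.

# Illustrative sketch of the dynamic grouping used above.
- hosts: all
  gather_facts: false
  vars:
    enable_ceph_rgw: true                # normally set via the deployment configuration
  tasks:
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_ceph_rgw_{{ enable_ceph_rgw | bool }}"

- hosts: enable_ceph_rgw_True
  gather_facts: false
  tasks:
    - name: Runs only on hosts that ended up in the dynamic group
      ansible.builtin.debug:
        msg: "ceph-rgw deploy tasks would target {{ inventory_hostname }}"

The group name is simply the rendered key string, so the boolean is normalized with the bool filter to get a stable True/False suffix, matching the enable_..._True items printed in the log.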
2025-05-14 02:44:09.638410 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-14 02:44:09.638417 | orchestrator | 2025-05-14 02:44:09.638424 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-14 02:44:09.638436 | orchestrator | Wednesday 14 May 2025 02:42:28 +0000 (0:00:04.397) 0:00:21.694 ********* 2025-05-14 02:44:09.638443 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:44:09.638450 | orchestrator | 2025-05-14 02:44:09.638505 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-05-14 02:44:09.638513 | orchestrator | Wednesday 14 May 2025 02:42:31 +0000 (0:00:03.464) 0:00:25.158 ********* 2025-05-14 02:44:09.638520 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-14 02:44:09.638527 | orchestrator | 2025-05-14 02:44:09.638533 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-05-14 02:44:09.638540 | orchestrator | Wednesday 14 May 2025 02:42:37 +0000 (0:00:05.473) 0:00:30.631 ********* 2025-05-14 02:44:09.638547 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:09.638554 | orchestrator | 2025-05-14 02:44:09.638561 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-14 02:44:09.638574 | orchestrator | Wednesday 14 May 2025 02:42:40 +0000 (0:00:03.344) 0:00:33.976 ********* 2025-05-14 02:44:09.638580 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:09.638587 | orchestrator | 2025-05-14 02:44:09.638592 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-14 02:44:09.638598 | orchestrator | Wednesday 14 May 2025 02:42:44 +0000 (0:00:04.135) 0:00:38.111 ********* 2025-05-14 02:44:09.638605 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:09.638612 | orchestrator | 2025-05-14 02:44:09.638618 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-14 02:44:09.638623 | orchestrator | Wednesday 14 May 2025 02:42:49 +0000 (0:00:04.131) 0:00:42.243 ********* 2025-05-14 02:44:09.638633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.638644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.638658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.638669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.638684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.638692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.638699 | orchestrator | 2025-05-14 02:44:09.638727 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-14 02:44:09.638734 | orchestrator | Wednesday 14 May 2025 02:42:52 +0000 (0:00:03.254) 0:00:45.501 ********* 2025-05-14 02:44:09.638741 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:09.638748 | orchestrator | 2025-05-14 02:44:09.638755 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-14 02:44:09.638767 | orchestrator | Wednesday 14 May 2025 02:42:52 +0000 (0:00:00.168) 0:00:45.670 ********* 2025-05-14 02:44:09.638774 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:09.638780 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:09.638787 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:09.638794 | orchestrator | 2025-05-14 02:44:09.638801 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-14 02:44:09.638808 | orchestrator | Wednesday 14 May 2025 02:42:53 +0000 (0:00:00.683) 0:00:46.353 ********* 2025-05-14 02:44:09.638816 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:44:09.638822 | orchestrator | 2025-05-14 02:44:09.638829 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-14 02:44:09.638836 | orchestrator | Wednesday 14 May 2025 02:42:54 +0000 (0:00:01.662) 0:00:48.015 ********* 2025-05-14 02:44:09.638842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.638852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 
02:44:09.638860 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:09.638873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.638880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.638891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.638898 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:09.638906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.638913 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:09.638920 | orchestrator | 2025-05-14 02:44:09.638926 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-14 02:44:09.638933 | orchestrator | Wednesday 14 May 2025 02:42:56 +0000 (0:00:01.864) 0:00:49.879 ********* 2025-05-14 02:44:09.638939 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:09.638945 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:09.638952 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:09.638958 | orchestrator | 2025-05-14 02:44:09.638968 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-14 02:44:09.638975 | orchestrator | Wednesday 14 May 2025 02:42:57 +0000 (0:00:00.674) 0:00:50.554 ********* 2025-05-14 02:44:09.638982 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:44:09.638989 | orchestrator | 2025-05-14 02:44:09.638996 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-14 02:44:09.639002 | orchestrator | Wednesday 14 May 2025 02:42:58 +0000 (0:00:00.869) 0:00:51.423 ********* 2025-05-14 02:44:09.639015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639078 | orchestrator | 2025-05-14 02:44:09.639085 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-14 02:44:09.639091 | orchestrator | Wednesday 14 May 2025 02:43:01 +0000 (0:00:03.057) 0:00:54.481 ********* 2025-05-14 02:44:09.639098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.639105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.639112 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:09.639119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.639138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.639146 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:09.639157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.639164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.639171 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:09.639178 | orchestrator | 2025-05-14 02:44:09.639186 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-14 02:44:09.639192 | orchestrator | Wednesday 14 May 2025 02:43:03 +0000 (0:00:01.815) 0:00:56.297 ********* 2025-05-14 02:44:09.639199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.639210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.639217 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:09.639230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc2025-05-14 02:44:09 | INFO  | Task 3695ba3a-6106-4d01-80d6-1aeecf8c5baa is in state SUCCESS 2025-05-14 02:44:09.639243 | orchestrator | /timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.639250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.639258 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:09.639265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.639273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.639280 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:09.639286 | orchestrator | 2025-05-14 02:44:09.639293 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-14 02:44:09.639301 | orchestrator | Wednesday 14 May 2025 02:43:04 +0000 (0:00:01.049) 0:00:57.346 ********* 2025-05-14 02:44:09.639315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639342 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639374 | orchestrator | 2025-05-14 02:44:09.639381 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-14 02:44:09.639388 | orchestrator | Wednesday 14 May 2025 02:43:06 +0000 (0:00:02.683) 0:01:00.030 ********* 2025-05-14 02:44:09.639395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639401 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639490 | orchestrator | 2025-05-14 02:44:09.639500 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-14 02:44:09.639507 | orchestrator | Wednesday 14 May 2025 02:43:15 +0000 (0:00:08.571) 0:01:08.601 ********* 2025-05-14 02:44:09.639514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.639521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.639532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.639550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.639557 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:09.639564 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:09.639571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:44:09.639577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:44:09.639583 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:09.639589 | orchestrator | 2025-05-14 02:44:09.639597 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-14 02:44:09.639604 | orchestrator | Wednesday 14 May 2025 02:43:17 +0000 (0:00:01.930) 0:01:10.532 ********* 2025-05-14 02:44:09.639611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:44:09.639646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:09.639673 | orchestrator | 2025-05-14 02:44:09.639680 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-14 02:44:09.639692 | orchestrator | Wednesday 14 May 2025 02:43:20 +0000 (0:00:03.241) 0:01:13.773 ********* 2025-05-14 02:44:09.639700 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:09.639707 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:09.639714 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:09.639721 | orchestrator | 2025-05-14 02:44:09.639728 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-14 02:44:09.639735 | orchestrator | Wednesday 14 May 2025 02:43:21 +0000 (0:00:00.605) 0:01:14.378 ********* 2025-05-14 02:44:09.639743 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:09.639750 | orchestrator | 2025-05-14 02:44:09.639758 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-14 02:44:09.639765 | orchestrator | Wednesday 14 May 2025 02:43:24 +0000 (0:00:02.899) 0:01:17.278 ********* 2025-05-14 02:44:09.639772 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:09.639779 | orchestrator | 2025-05-14 02:44:09.639786 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-14 02:44:09.639797 | orchestrator | Wednesday 14 May 2025 02:43:26 +0000 (0:00:02.462) 0:01:19.740 ********* 2025-05-14 02:44:09.639805 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:09.639812 | orchestrator | 2025-05-14 02:44:09.639818 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-14 02:44:09.639824 | orchestrator | Wednesday 14 May 2025 02:43:44 +0000 (0:00:17.461) 0:01:37.202 ********* 2025-05-14 02:44:09.639830 | orchestrator | 2025-05-14 02:44:09.639837 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-14 02:44:09.639843 | orchestrator | Wednesday 14 May 2025 02:43:44 +0000 (0:00:00.083) 0:01:37.285 ********* 2025-05-14 02:44:09.639850 | orchestrator | 2025-05-14 02:44:09.639856 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-14 02:44:09.639862 | orchestrator | Wednesday 14 May 2025 02:43:44 +0000 (0:00:00.257) 
0:01:37.543 ********* 2025-05-14 02:44:09.639869 | orchestrator | 2025-05-14 02:44:09.639875 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-14 02:44:09.639882 | orchestrator | Wednesday 14 May 2025 02:43:44 +0000 (0:00:00.124) 0:01:37.668 ********* 2025-05-14 02:44:09.639888 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:09.639894 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:44:09.639900 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:44:09.639905 | orchestrator | 2025-05-14 02:44:09.639911 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-14 02:44:09.639916 | orchestrator | Wednesday 14 May 2025 02:43:59 +0000 (0:00:15.098) 0:01:52.766 ********* 2025-05-14 02:44:09.639923 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:09.639929 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:44:09.639934 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:44:09.639940 | orchestrator | 2025-05-14 02:44:09.639946 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:44:09.639952 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 02:44:09.639965 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:44:09.639973 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:44:09.639979 | orchestrator | 2025-05-14 02:44:09.639984 | orchestrator | 2025-05-14 02:44:09.639991 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:44:09.639997 | orchestrator | Wednesday 14 May 2025 02:44:06 +0000 (0:00:06.976) 0:01:59.743 ********* 2025-05-14 02:44:09.640003 | orchestrator | =============================================================================== 2025-05-14 02:44:09.640009 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.46s 2025-05-14 02:44:09.640016 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.10s 2025-05-14 02:44:09.640022 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.57s 2025-05-14 02:44:09.640028 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 6.98s 2025-05-14 02:44:09.640034 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.63s 2025-05-14 02:44:09.640040 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 5.47s 2025-05-14 02:44:09.640046 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.40s 2025-05-14 02:44:09.640052 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.14s 2025-05-14 02:44:09.640058 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.13s 2025-05-14 02:44:09.640065 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.94s 2025-05-14 02:44:09.640071 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.66s 2025-05-14 02:44:09.640076 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.46s 2025-05-14 02:44:09.640083 | 
orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.34s 2025-05-14 02:44:09.640089 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.25s 2025-05-14 02:44:09.640095 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.24s 2025-05-14 02:44:09.640101 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.06s 2025-05-14 02:44:09.640107 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.90s 2025-05-14 02:44:09.640113 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.68s 2025-05-14 02:44:09.640126 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.46s 2025-05-14 02:44:09.640132 | orchestrator | magnum : Copying over existing policy file ------------------------------ 1.93s 2025-05-14 02:44:09.640139 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:09.640146 | orchestrator | 2025-05-14 02:44:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:12.679291 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:12.681121 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:12.682379 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:12.685131 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:12.688360 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:12.688920 | orchestrator | 2025-05-14 02:44:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:15.729331 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:15.729886 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:15.731312 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:15.732690 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:15.734162 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:15.734203 | orchestrator | 2025-05-14 02:44:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:18.780817 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:18.780932 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:18.780948 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:18.781379 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:18.783345 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:18.783385 | orchestrator | 2025-05-14 02:44:18 | INFO  | Wait 
1 second(s) until the next check 2025-05-14 02:44:21.809742 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:21.809824 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:21.809927 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:21.812706 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:21.812996 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:21.813014 | orchestrator | 2025-05-14 02:44:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:24.844299 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:24.844423 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:24.845424 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:24.845791 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:24.847535 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:24.847565 | orchestrator | 2025-05-14 02:44:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:27.878917 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:27.879005 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:27.880070 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:27.880710 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:27.882485 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:27.882556 | orchestrator | 2025-05-14 02:44:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:30.930177 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:30.931437 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:30.932344 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:30.933014 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:30.934009 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:30.934074 | orchestrator | 2025-05-14 02:44:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:33.971280 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:33.971375 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:33.971970 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task 
c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:33.973982 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:33.974504 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:33.974537 | orchestrator | 2025-05-14 02:44:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:37.002365 | orchestrator | 2025-05-14 02:44:37 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:37.003117 | orchestrator | 2025-05-14 02:44:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:37.005593 | orchestrator | 2025-05-14 02:44:37 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:37.007434 | orchestrator | 2025-05-14 02:44:37 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:37.007965 | orchestrator | 2025-05-14 02:44:37 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:37.007991 | orchestrator | 2025-05-14 02:44:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:40.058074 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:40.059997 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:40.060025 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:40.060570 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:40.061925 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:40.063244 | orchestrator | 2025-05-14 02:44:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:43.112013 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:43.113404 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:43.115043 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:43.116147 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:43.119624 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:43.119669 | orchestrator | 2025-05-14 02:44:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:46.162823 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:46.163775 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:46.165246 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:46.166331 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:46.166974 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:46.166998 | orchestrator | 2025-05-14 
02:44:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:49.205318 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:49.205717 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:49.212021 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:49.213376 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:49.214298 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:49.214374 | orchestrator | 2025-05-14 02:44:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:52.260693 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:52.260796 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:52.261900 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:52.263307 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:52.264652 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:52.264683 | orchestrator | 2025-05-14 02:44:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:55.301411 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:55.301494 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:55.301505 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:55.302004 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:55.302662 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:55.302684 | orchestrator | 2025-05-14 02:44:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:58.360195 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:44:58.360289 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:44:58.360297 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:44:58.360781 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:44:58.366970 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:44:58.367023 | orchestrator | 2025-05-14 02:44:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:01.415626 | orchestrator | 2025-05-14 02:45:01 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:01.415763 | orchestrator | 2025-05-14 02:45:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:01.417062 | orchestrator | 2025-05-14 
02:45:01 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:01.418272 | orchestrator | 2025-05-14 02:45:01 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:01.419953 | orchestrator | 2025-05-14 02:45:01 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:45:01.419997 | orchestrator | 2025-05-14 02:45:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:04.458726 | orchestrator | 2025-05-14 02:45:04 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:04.459016 | orchestrator | 2025-05-14 02:45:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:04.459788 | orchestrator | 2025-05-14 02:45:04 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:04.460790 | orchestrator | 2025-05-14 02:45:04 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:04.461592 | orchestrator | 2025-05-14 02:45:04 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:45:04.461624 | orchestrator | 2025-05-14 02:45:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:07.516399 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:07.516716 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:07.517360 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:07.518095 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:07.526141 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state STARTED 2025-05-14 02:45:07.526219 | orchestrator | 2025-05-14 02:45:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:10.567815 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:10.567915 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:10.568327 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:10.568804 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:10.569303 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:10.575782 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task 2f94e58c-ebcc-404c-b88a-e7b392af0843 is in state SUCCESS 2025-05-14 02:45:10.576453 | orchestrator | 2025-05-14 02:45:10.577989 | orchestrator | 2025-05-14 02:45:10.578093 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:45:10.578110 | orchestrator | 2025-05-14 02:45:10.578122 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:45:10.578134 | orchestrator | Wednesday 14 May 2025 02:39:57 +0000 (0:00:00.399) 0:00:00.399 ********* 2025-05-14 02:45:10.578145 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:10.578157 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:10.578168 | orchestrator | ok: 
[testbed-node-2] 2025-05-14 02:45:10.578179 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:45:10.578190 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:45:10.578200 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:45:10.578211 | orchestrator | 2025-05-14 02:45:10.578223 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:45:10.578452 | orchestrator | Wednesday 14 May 2025 02:39:58 +0000 (0:00:00.949) 0:00:01.349 ********* 2025-05-14 02:45:10.578465 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-14 02:45:10.578477 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-14 02:45:10.578487 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-14 02:45:10.578498 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-14 02:45:10.578509 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-14 02:45:10.578519 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-14 02:45:10.578530 | orchestrator | 2025-05-14 02:45:10.578541 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-14 02:45:10.578551 | orchestrator | 2025-05-14 02:45:10.578563 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 02:45:10.578574 | orchestrator | Wednesday 14 May 2025 02:39:59 +0000 (0:00:00.910) 0:00:02.260 ********* 2025-05-14 02:45:10.578585 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:45:10.578598 | orchestrator | 2025-05-14 02:45:10.578609 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-14 02:45:10.578620 | orchestrator | Wednesday 14 May 2025 02:40:00 +0000 (0:00:01.143) 0:00:03.403 ********* 2025-05-14 02:45:10.578630 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:10.578641 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:10.578652 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:45:10.578663 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:45:10.578673 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:45:10.578684 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:45:10.578695 | orchestrator | 2025-05-14 02:45:10.578705 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-14 02:45:10.578716 | orchestrator | Wednesday 14 May 2025 02:40:01 +0000 (0:00:01.137) 0:00:04.540 ********* 2025-05-14 02:45:10.578727 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:10.578738 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:45:10.578748 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:10.578759 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:45:10.578769 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:45:10.578780 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:45:10.578813 | orchestrator | 2025-05-14 02:45:10.578861 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-14 02:45:10.578875 | orchestrator | Wednesday 14 May 2025 02:40:02 +0000 (0:00:00.989) 0:00:05.530 ********* 2025-05-14 02:45:10.578886 | orchestrator | ok: [testbed-node-0] => { 2025-05-14 02:45:10.578937 | orchestrator |  "changed": false, 2025-05-14 02:45:10.578949 | orchestrator |  "msg": "All assertions 
passed" 2025-05-14 02:45:10.578961 | orchestrator | } 2025-05-14 02:45:10.578972 | orchestrator | ok: [testbed-node-1] => { 2025-05-14 02:45:10.579004 | orchestrator |  "changed": false, 2025-05-14 02:45:10.579015 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:10.579026 | orchestrator | } 2025-05-14 02:45:10.579039 | orchestrator | ok: [testbed-node-2] => { 2025-05-14 02:45:10.579181 | orchestrator |  "changed": false, 2025-05-14 02:45:10.579206 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:10.579223 | orchestrator | } 2025-05-14 02:45:10.579240 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:45:10.579256 | orchestrator |  "changed": false, 2025-05-14 02:45:10.579273 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:10.579314 | orchestrator | } 2025-05-14 02:45:10.579333 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:45:10.579350 | orchestrator |  "changed": false, 2025-05-14 02:45:10.579368 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:10.579385 | orchestrator | } 2025-05-14 02:45:10.579404 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:45:10.579451 | orchestrator |  "changed": false, 2025-05-14 02:45:10.579471 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:10.579489 | orchestrator | } 2025-05-14 02:45:10.579508 | orchestrator | 2025-05-14 02:45:10.579528 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-14 02:45:10.579548 | orchestrator | Wednesday 14 May 2025 02:40:03 +0000 (0:00:00.557) 0:00:06.088 ********* 2025-05-14 02:45:10.579565 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.579584 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.579598 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.579609 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.579620 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.579630 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.579641 | orchestrator | 2025-05-14 02:45:10.579652 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-14 02:45:10.579663 | orchestrator | Wednesday 14 May 2025 02:40:03 +0000 (0:00:00.723) 0:00:06.811 ********* 2025-05-14 02:45:10.579673 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-14 02:45:10.579684 | orchestrator | 2025-05-14 02:45:10.579695 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-14 02:45:10.579706 | orchestrator | Wednesday 14 May 2025 02:40:07 +0000 (0:00:03.740) 0:00:10.552 ********* 2025-05-14 02:45:10.579717 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-14 02:45:10.579730 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-14 02:45:10.579740 | orchestrator | 2025-05-14 02:45:10.579771 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-14 02:45:10.579783 | orchestrator | Wednesday 14 May 2025 02:40:14 +0000 (0:00:07.208) 0:00:17.760 ********* 2025-05-14 02:45:10.579795 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:45:10.579805 | orchestrator | 2025-05-14 02:45:10.579816 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-14 02:45:10.579827 
| orchestrator | Wednesday 14 May 2025 02:40:18 +0000 (0:00:03.551) 0:00:21.311 ********* 2025-05-14 02:45:10.579838 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:45:10.579849 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-14 02:45:10.579860 | orchestrator | 2025-05-14 02:45:10.579871 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-14 02:45:10.579883 | orchestrator | Wednesday 14 May 2025 02:40:22 +0000 (0:00:04.071) 0:00:25.383 ********* 2025-05-14 02:45:10.579898 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:45:10.579917 | orchestrator | 2025-05-14 02:45:10.579934 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-14 02:45:10.579952 | orchestrator | Wednesday 14 May 2025 02:40:25 +0000 (0:00:03.459) 0:00:28.843 ********* 2025-05-14 02:45:10.579971 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-14 02:45:10.580008 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-14 02:45:10.580026 | orchestrator | 2025-05-14 02:45:10.580041 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 02:45:10.580052 | orchestrator | Wednesday 14 May 2025 02:40:34 +0000 (0:00:08.488) 0:00:37.331 ********* 2025-05-14 02:45:10.580063 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.580074 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.580085 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.580096 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.580107 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.580118 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.580129 | orchestrator | 2025-05-14 02:45:10.580140 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-14 02:45:10.580152 | orchestrator | Wednesday 14 May 2025 02:40:34 +0000 (0:00:00.720) 0:00:38.051 ********* 2025-05-14 02:45:10.580163 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.580174 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.580185 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.580196 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.580207 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.580217 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.580229 | orchestrator | 2025-05-14 02:45:10.580239 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-14 02:45:10.580251 | orchestrator | Wednesday 14 May 2025 02:40:38 +0000 (0:00:03.038) 0:00:41.090 ********* 2025-05-14 02:45:10.580262 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:10.580273 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:10.580284 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:45:10.580295 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:45:10.580305 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:45:10.580316 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:45:10.580327 | orchestrator | 2025-05-14 02:45:10.580347 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-14 02:45:10.580359 | orchestrator | Wednesday 14 May 2025 02:40:39 +0000 (0:00:01.715) 0:00:42.805 ********* 
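The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages earlier in this log come from a simple state-polling loop in the deploy tooling. The following is only a minimal Python sketch of that polling pattern, not the actual OSISM client code; get_task_state() is a hypothetical placeholder for whatever call queries the task/result backend.

    import time
    from datetime import datetime

    def get_task_state(task_id):
        # Hypothetical placeholder: a real client would query the task/result
        # backend here and return e.g. "STARTED", "SUCCESS" or "FAILURE".
        return "SUCCESS"

    def wait_for_tasks(task_ids, interval=1):
        # Poll every task until it reaches a terminal state, logging each check
        # in the same style as the console output above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO | Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO | Wait {interval} second(s) until the next check")
                time.sleep(interval)

    wait_for_tasks(["da89b621-4308-4f00-b93e-b72d0ea2b53c",
                    "2f94e58c-ebcc-404c-b88a-e7b392af0843"])

In the log above, a task drops out of the polling loop once it reports SUCCESS (for example task 2f94e58c-ebcc-404c-b88a-e7b392af0843 at 02:45:10), after which the buffered Ansible output for that play is streamed.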
2025-05-14 02:45:10.580370 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.580380 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.580391 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.580402 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.580413 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.580460 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.580473 | orchestrator | 2025-05-14 02:45:10.580485 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-14 02:45:10.580496 | orchestrator | Wednesday 14 May 2025 02:40:41 +0000 (0:00:02.214) 0:00:45.019 ********* 2025-05-14 02:45:10.580511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.580540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.580606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.580647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.580660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.580690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.580713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.580780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.580829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.580840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.580868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.580924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.580937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.580949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.580965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.580977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.581016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.581043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.581063 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.581093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.581115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.581138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.584728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.584838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.584851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.584872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.584960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.584989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.585066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.585190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.585225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.585260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.585270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.585332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.585416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.585531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.585566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.585644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.585664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.585693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.585749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.585842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.585886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.585943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.585960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.585969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.585983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.585992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.586009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.586082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.586095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.586109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.586118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.586127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.586148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.586158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.586166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.586174 | orchestrator | 2025-05-14 02:45:10.586183 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-14 02:45:10.586192 | orchestrator | Wednesday 14 May 2025 02:40:45 +0000 (0:00:03.276) 0:00:48.296 ********* 2025-05-14 02:45:10.586200 | orchestrator | [WARNING]: Skipped 2025-05-14 02:45:10.586209 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-14 02:45:10.586217 | orchestrator | due to this access issue: 2025-05-14 02:45:10.586248 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-14 02:45:10.586256 | orchestrator | a directory 2025-05-14 02:45:10.586265 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:45:10.586273 | orchestrator | 2025-05-14 02:45:10.586281 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 02:45:10.586289 | orchestrator | Wednesday 14 May 2025 02:40:46 +0000 (0:00:01.172) 0:00:49.468 ********* 2025-05-14 02:45:10.586297 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:45:10.586312 | orchestrator | 2025-05-14 02:45:10.586320 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-14 02:45:10.586328 | orchestrator | Wednesday 14 May 2025 02:40:48 +0000 (0:00:02.405) 0:00:51.874 ********* 2025-05-14 02:45:10.586337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.586350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.586359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.586368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.586381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.586395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2025-05-14 02:45:10.586403 | orchestrator | 2025-05-14 02:45:10.586412 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-14 02:45:10.586445 | orchestrator | Wednesday 14 May 2025 02:40:55 +0000 (0:00:06.955) 0:00:58.830 ********* 2025-05-14 02:45:10.586458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.586466 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.586475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.586483 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.586498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.586512 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.586520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.586528 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.586537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.586545 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.586557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.586565 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.586573 | orchestrator | 2025-05-14 02:45:10.586581 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-14 02:45:10.586589 | orchestrator | Wednesday 14 May 2025 02:41:00 +0000 (0:00:05.099) 0:01:03.929 ********* 2025-05-14 02:45:10.586597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.586610 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.586636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.586646 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.586654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.586662 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.586675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.586683 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.586692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.586700 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.586708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.586721 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.586730 | orchestrator | 2025-05-14 02:45:10.586748 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-14 02:45:10.586756 | orchestrator | Wednesday 14 May 2025 02:41:07 +0000 (0:00:06.835) 0:01:10.765 ********* 2025-05-14 02:45:10.586765 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.586773 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.586781 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.586789 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.586797 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.586805 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.586812 | orchestrator | 2025-05-14 02:45:10.586820 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-14 02:45:10.586829 | orchestrator | Wednesday 14 May 2025 02:41:12 +0000 (0:00:04.362) 0:01:15.127 ********* 2025-05-14 02:45:10.586837 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.586845 | orchestrator | 2025-05-14 02:45:10.586853 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-14 02:45:10.586861 | orchestrator | Wednesday 14 May 2025 02:41:12 +0000 (0:00:00.164) 0:01:15.292 ********* 2025-05-14 02:45:10.586869 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.586877 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.586885 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.586893 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.586900 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.586908 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.586916 | orchestrator | 2025-05-14 02:45:10.586924 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-14 02:45:10.586932 | orchestrator | Wednesday 14 May 2025 02:41:13 +0000 (0:00:00.889) 0:01:16.182 ********* 2025-05-14 02:45:10.586941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.586954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.586968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.586981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.586991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.587000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.587088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.587106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.587186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.587200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587209 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.587217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.587230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.587276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.587332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.587471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.587508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.587521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587530 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.587539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.587547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.587595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.587646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.587668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.587703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.587711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587719 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.587733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.587742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.587790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.587845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.587867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.587875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.587905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.587913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587921 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.587934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.587942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.587985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.587998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.588007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.588021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.588041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.588058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.588071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.588097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.588106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588114 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.588122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.588135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.588178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.588763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.588798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.588824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.588842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.588857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.588883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.588899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.588908 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.588916 | orchestrator | 2025-05-14 02:45:10.588925 | orchestrator | TASK [neutron : Copying over config.json files 
for services] ******************* 2025-05-14 02:45:10.588933 | orchestrator | Wednesday 14 May 2025 02:41:17 +0000 (0:00:04.284) 0:01:20.466 ********* 2025-05-14 02:45:10.588941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.588955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589038 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.589061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.589150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.589168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.589203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.589210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.589228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.589293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.589301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-14 02:45:10.589313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.589333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.589437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.589452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.589619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.589630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.589663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589701 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.589712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.589727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.589751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.589758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.589826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.589870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.589888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.589919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.589926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.589945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.589964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.589974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.589982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.594147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.594223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.594265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.594287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.594298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.594321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.594369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.594380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.594396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.594418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.594477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.594487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594498 | orchestrator | 2025-05-14 02:45:10.594510 | orchestrator | TASK [neutron : Copying over neutron.conf] 
************************************* 2025-05-14 02:45:10.594520 | orchestrator | Wednesday 14 May 2025 02:41:22 +0000 (0:00:05.273) 0:01:25.740 ********* 2025-05-14 02:45:10.594538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.594559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594596 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.594614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.594651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.594663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.594691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.594754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.594778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.594790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.594826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.594886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.594909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.594924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.594945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.594991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.595005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.595027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.595052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.595074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.595100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.595111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.595136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.595147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.595203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.595220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  
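Note on the loop output above and below: per host, each entry of the neutron service map is only acted on ("changed", i.e. its config template is copied) when the service is enabled and the host is in the service's group ('host_in_groups'); every other combination is reported as "skipping". The snippet below is a minimal illustrative sketch of that filter, not kolla-ansible source; the exact when-clause (enabled | bool and host_in_groups | bool) is an assumption inferred from which items are skipped, and the service data is abridged from the log entries.

    # Minimal sketch: reproduce the skip/changed pattern seen in the loop output.
    # Each item pairs a service name ('key') with its container definition ('value');
    # an item is applied only when both 'enabled' and 'host_in_groups' are truthy
    # for the current host. String values such as 'no' (neutron-tls-proxy) count as
    # false, mirroring Ansible's "| bool" casts.

    def truthy(value):
        """Rough equivalent of Ansible's `| bool` cast for the flags in the log."""
        if isinstance(value, str):
            return value.strip().lower() in ("yes", "true", "1", "on")
        return bool(value)

    # Abridged from the item dicts printed above (one host's view).
    services = {
        "neutron-server": {"enabled": True, "host_in_groups": True},
        "neutron-openvswitch-agent": {"enabled": False, "host_in_groups": True},
        "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": True},
        "neutron-tls-proxy": {"enabled": "no", "host_in_groups": True},
    }

    for name, service in services.items():
        if truthy(service["enabled"]) and truthy(service["host_in_groups"]):
            print(f"changed:  {name}")   # config template copied for this service
        else:
            print(f"skipping: {name}")   # "Conditional result was False"

Running this prints "changed" only for neutron-server and neutron-ovn-metadata-agent, matching the pattern in the task output: in this OVN-based testbed the classic OVS/linuxbridge/DHCP/L3 agents and the TLS proxy stay disabled and are skipped on every node.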
2025-05-14 02:45:10.595231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.595243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.595257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595302 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.595313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.595339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.595350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.595383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.595405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.595416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.595477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.595494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.595518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.595532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.595566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.595616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.595627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.595638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.597357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.597415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.597520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.597532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.597567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.597577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.597591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.597636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.597647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.597680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.597690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.597718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.597740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.597797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.597825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.597834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 
02:45:10.597843 | orchestrator | 2025-05-14 02:45:10.597853 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-14 02:45:10.597863 | orchestrator | Wednesday 14 May 2025 02:41:32 +0000 (0:00:09.874) 0:01:35.615 ********* 2025-05-14 02:45:10.597873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.597895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.597943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.597972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.597982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.597992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.598007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.598059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.598089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.598135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.598156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598175 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.598185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.598222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.598284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.598324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.598367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.598378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598407 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.598435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.598454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.598503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.598560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.598583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.598646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.598711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598724 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.598752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.598769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.598824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.598884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.598915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.598931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.598941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.598955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.598966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.598976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.599073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.599105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-14 02:45:10.599154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.599204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.599247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.599257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.599327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.599340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.599381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.599393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.599454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.599467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599476 | orchestrator | 2025-05-14 02:45:10.599485 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-14 02:45:10.599495 | orchestrator | Wednesday 14 May 2025 02:41:36 +0000 (0:00:03.944) 0:01:39.560 ********* 2025-05-14 02:45:10.599504 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.599515 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.599524 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:45:10.599545 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:10.599558 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.599564 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:45:10.599570 | orchestrator | 2025-05-14 02:45:10.599579 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-14 02:45:10.599588 | orchestrator | Wednesday 14 May 2025 02:41:42 +0000 (0:00:05.705) 0:01:45.265 ********* 2025-05-14 02:45:10.599597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.599632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.599697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-14 02:45:10.599721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.599735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.599752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10 | INFO  | Wait 1 second(s) until the next check  2025-05-14 02:45:10.599782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.599790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599804 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.599810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.599816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.599850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.599890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.599909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.599914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.599932 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.599938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599947 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.599955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.599962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.599983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.599994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.600034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.600054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.600077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.600083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600120 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.600130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.600136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.600174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-05-14 02:45:10.600210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.600225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2025-05-14 02:45:10.600255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.600261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.600289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.600327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.600370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.600386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.600394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.600446 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.600457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.600526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.600574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.600603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.600612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.600626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.600632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-14 02:45:10.600638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-14 02:45:10.600648 | orchestrator |
2025-05-14 02:45:10.600653 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-05-14 02:45:10.600669 | orchestrator | Wednesday 14 May 2025 02:41:46 +0000 (0:00:04.020) 0:01:49.286 *********
2025-05-14 02:45:10.600675 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:45:10.600681 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:45:10.600687 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:45:10.600692 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:45:10.600697 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:45:10.600703 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:45:10.600708 | orchestrator |
2025-05-14 02:45:10.600722 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-05-14 02:45:10.600728 | orchestrator | Wednesday 14 May 2025 02:41:48 +0000 (0:00:02.702) 0:01:51.988 *********
2025-05-14 02:45:10.600733 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:45:10.600739 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:45:10.600744 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:45:10.600749 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:45:10.600754 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:45:10.600760 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:45:10.600765 | orchestrator |
2025-05-14 02:45:10.600771 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-05-14 02:45:10.600776 | orchestrator | Wednesday 14 May 2025 02:41:50 +0000 (0:00:01.992) 0:01:53.981 *********
2025-05-14 02:45:10.600782 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:45:10.600787 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:45:10.600793 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:45:10.600798 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:45:10.600803 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:45:10.600809 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:45:10.600814 | orchestrator |
2025-05-14 02:45:10.600820 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-05-14 02:45:10.600825 | orchestrator | Wednesday 14 May 2025 02:41:53 +0000 (0:00:02.612) 0:01:56.593 *********
2025-05-14 02:45:10.600831 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:45:10.600843 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:45:10.600849 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:45:10.600854 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:45:10.600859 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:45:10.600865 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:45:10.600870 | orchestrator |
2025-05-14 02:45:10.600876 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-05-14 02:45:10.600881 | orchestrator | Wednesday 14 May 2025 02:41:55 +0000 (0:00:02.196) 0:01:58.790 *********
2025-05-14 02:45:10.600887 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:45:10.600892 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:45:10.600897 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:45:10.600903 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:45:10.600908 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:45:10.600913 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:45:10.600919 | orchestrator |
2025-05-14 02:45:10.600924 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-05-14 02:45:10.600933 | orchestrator | Wednesday 14 May 2025 02:41:57 +0000 (0:00:02.069) 0:02:00.859 *********
2025-05-14 02:45:10.600939 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:45:10.600945 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:45:10.600950 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:45:10.600958 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:45:10.600963 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:45:10.600968 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:45:10.600974 | orchestrator |
2025-05-14 02:45:10.600979 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-05-14 02:45:10.600985 | orchestrator | Wednesday 14 May 2025 02:42:00 +0000 (0:00:02.859) 0:02:03.719 *********
2025-05-14 02:45:10.600990 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 02:45:10.600997 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:45:10.601011 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 02:45:10.601016 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:45:10.601022 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 02:45:10.601027 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:45:10.601033 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 02:45:10.601038 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:45:10.601044 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 02:45:10.601049 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:45:10.601055 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 02:45:10.601060 | orchestrator | skipping: [testbed-node-5]
2025-05-14
02:45:10.601066 | orchestrator | 2025-05-14 02:45:10.601071 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-14 02:45:10.601077 | orchestrator | Wednesday 14 May 2025 02:42:03 +0000 (0:00:02.740) 0:02:06.459 ********* 2025-05-14 02:45:10.601094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.601101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.601130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.601181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.601216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601228 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.601234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.601243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.601274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.601332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.601357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601371 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.601377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.601392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.601466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.601500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.601570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.601603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.601641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601742 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.601747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.601772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.601793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.601814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.601859 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.601864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 
02:45:10.601929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.601949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.601957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.601979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.601985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.601990 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.601996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.602005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.602081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-05-14 02:45:10.602086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.602137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.602160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-14 02:45:10.602175 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:45:10.602180 | orchestrator |
2025-05-14 02:45:10.602185 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-05-14 02:45:10.602190 | orchestrator | Wednesday 14 May 2025 02:42:06 +0000 (0:00:03.230) 0:02:09.689 *********
2025-05-14 02:45:10.602195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-14 02:45:10.602203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-14 02:45:10.602212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-14 02:45:10.602218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image':
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.602246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.602294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.602316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602336 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.602341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-14 02:45:10.602347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.602383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.602388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.602475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.602501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 
'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.602528 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.602538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.602642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.602693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602711 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.602724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.602733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.602777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.602861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.602884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.602912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.602921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602930 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.602937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.602958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.602995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.603004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.603042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.603052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.603082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.603099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.603119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.603146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.603159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603168 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.603176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.603211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.603258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.603283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.603304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.603323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.603359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.603368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.603407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.603417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.603485 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.603494 | orchestrator | 2025-05-14 02:45:10.603507 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-14 02:45:10.603516 | orchestrator | Wednesday 14 May 2025 02:42:10 +0000 (0:00:03.734) 0:02:13.424 ********* 2025-05-14 02:45:10.603524 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.603532 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.603539 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.603548 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.603556 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.603564 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.603571 | orchestrator | 2025-05-14 02:45:10.603579 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-14 02:45:10.603587 | orchestrator | Wednesday 14 May 2025 02:42:12 +0000 (0:00:02.149) 0:02:15.574 ********* 2025-05-14 02:45:10.603594 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.603601 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.603618 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.603626 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:45:10.603635 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:45:10.603642 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:45:10.603651 | orchestrator | 2025-05-14 02:45:10.603659 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-14 02:45:10.603667 | orchestrator | Wednesday 14 May 2025 02:42:17 +0000 (0:00:05.377) 0:02:20.951 ********* 2025-05-14 02:45:10.603675 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.603683 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.603691 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.603698 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.603706 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.603714 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.603721 | orchestrator | 2025-05-14 02:45:10.603730 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-14 02:45:10.603739 | orchestrator | Wednesday 14 May 2025 02:42:19 +0000 (0:00:02.046) 0:02:22.998 ********* 2025-05-14 02:45:10.603747 | 
orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.603755 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.603764 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.603771 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.603780 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.603788 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.603796 | orchestrator | 2025-05-14 02:45:10.603805 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-14 02:45:10.603813 | orchestrator | Wednesday 14 May 2025 02:42:22 +0000 (0:00:02.815) 0:02:25.813 ********* 2025-05-14 02:45:10.603822 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.603830 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.603839 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.603847 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.603855 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.603864 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.603873 | orchestrator | 2025-05-14 02:45:10.603882 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-14 02:45:10.603911 | orchestrator | Wednesday 14 May 2025 02:42:25 +0000 (0:00:03.001) 0:02:28.814 ********* 2025-05-14 02:45:10.603921 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.603930 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.603939 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.603947 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.603955 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.603964 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.603972 | orchestrator | 2025-05-14 02:45:10.603981 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-14 02:45:10.603989 | orchestrator | Wednesday 14 May 2025 02:42:28 +0000 (0:00:02.774) 0:02:31.588 ********* 2025-05-14 02:45:10.603998 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.604006 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.604013 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.604022 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.604030 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.604038 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.604046 | orchestrator | 2025-05-14 02:45:10.604054 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-14 02:45:10.604063 | orchestrator | Wednesday 14 May 2025 02:42:31 +0000 (0:00:03.098) 0:02:34.687 ********* 2025-05-14 02:45:10.604071 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.604079 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.604087 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.604096 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.604111 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.604120 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.604128 | orchestrator | 2025-05-14 02:45:10.604135 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-14 02:45:10.604144 | orchestrator | Wednesday 14 May 2025 02:42:38 +0000 (0:00:06.607) 0:02:41.295 ********* 2025-05-14 
02:45:10.604153 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.604160 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.604168 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.604176 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.604183 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.604191 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.604199 | orchestrator | 2025-05-14 02:45:10.604208 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-14 02:45:10.604216 | orchestrator | Wednesday 14 May 2025 02:42:40 +0000 (0:00:02.277) 0:02:43.572 ********* 2025-05-14 02:45:10.604224 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.604233 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.604242 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.604250 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.604259 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.604268 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.604276 | orchestrator | 2025-05-14 02:45:10.604285 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-14 02:45:10.604293 | orchestrator | Wednesday 14 May 2025 02:42:44 +0000 (0:00:03.900) 0:02:47.473 ********* 2025-05-14 02:45:10.604307 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:10.604316 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.604324 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:10.604332 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.604340 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:10.604349 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.604357 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:10.604366 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.604375 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:10.604384 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.604392 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:10.604401 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.604409 | orchestrator | 2025-05-14 02:45:10.604418 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-14 02:45:10.604446 | orchestrator | Wednesday 14 May 2025 02:42:47 +0000 (0:00:03.240) 0:02:50.713 ********* 2025-05-14 02:45:10.604456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.604486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2025-05-14 02:45:10.604531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.604570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.604579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.604598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.604615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.604641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.604661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.604673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604682 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:10.604691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.604719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.604761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.604788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 
02:45:10.604810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.604829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.604850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.604865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.604897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.604906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604918 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.604927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.604942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.604981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.604993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.605054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.605074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.605118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.605127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605135 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.605148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.605165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.605216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-14 02:45:10.605249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.605279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.605301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.605341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.605350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605358 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.605367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.605386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.605450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.605511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.605527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.605560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.605579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605587 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.605628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.605648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.605694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.605762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.605780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605808 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.605833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.605855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605865 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.605875 | orchestrator | 2025-05-14 02:45:10.605884 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-14 02:45:10.605894 | orchestrator | Wednesday 14 May 2025 02:42:52 +0000 (0:00:04.852) 0:02:55.565 ********* 2025-05-14 02:45:10.605902 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.605920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.605965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.605978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.605987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.606059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.606084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.606128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.606206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.606215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.606259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.606317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.606331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.606345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.606374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.606408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.606416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.606459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:10.606493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-14 02:45:10.606506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.606522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.606584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:10.606634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2025-05-14 02:45:10.606659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.606677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.606687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:10.606721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.606729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.606769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.606807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.606841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.606864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.606872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.606907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.606924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.606931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.606955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.606968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:10.606984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.606993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:10.607012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:10.607021 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.607034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:10.607043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:10.607050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:10.607058 | orchestrator | 2025-05-14 02:45:10.607067 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 02:45:10.607075 | orchestrator | Wednesday 14 May 2025 02:42:58 +0000 (0:00:05.593) 0:03:01.158 ********* 2025-05-14 02:45:10.607084 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:10.607092 | orchestrator | skipping: [testbed-node-1] 2025-05-14 
02:45:10.607099 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:10.607111 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:10.607119 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:10.607127 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:10.607135 | orchestrator | 2025-05-14 02:45:10.607143 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-14 02:45:10.607150 | orchestrator | Wednesday 14 May 2025 02:42:59 +0000 (0:00:00.944) 0:03:02.103 ********* 2025-05-14 02:45:10.607158 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:10.607166 | orchestrator | 2025-05-14 02:45:10.607177 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-14 02:45:10.607186 | orchestrator | Wednesday 14 May 2025 02:43:02 +0000 (0:00:03.346) 0:03:05.450 ********* 2025-05-14 02:45:10.607194 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:10.607201 | orchestrator | 2025-05-14 02:45:10.607208 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-14 02:45:10.607216 | orchestrator | Wednesday 14 May 2025 02:43:05 +0000 (0:00:02.799) 0:03:08.249 ********* 2025-05-14 02:45:10.607223 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:10.607230 | orchestrator | 2025-05-14 02:45:10.607238 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-14 02:45:10.607245 | orchestrator | Wednesday 14 May 2025 02:43:48 +0000 (0:00:43.635) 0:03:51.885 ********* 2025-05-14 02:45:10.607253 | orchestrator | 2025-05-14 02:45:10.607260 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-14 02:45:10.607267 | orchestrator | Wednesday 14 May 2025 02:43:48 +0000 (0:00:00.052) 0:03:51.938 ********* 2025-05-14 02:45:10.607275 | orchestrator | 2025-05-14 02:45:10.607282 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-14 02:45:10.607290 | orchestrator | Wednesday 14 May 2025 02:43:49 +0000 (0:00:00.208) 0:03:52.147 ********* 2025-05-14 02:45:10.607298 | orchestrator | 2025-05-14 02:45:10.607305 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-14 02:45:10.607313 | orchestrator | Wednesday 14 May 2025 02:43:49 +0000 (0:00:00.063) 0:03:52.211 ********* 2025-05-14 02:45:10.607321 | orchestrator | 2025-05-14 02:45:10.607329 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-14 02:45:10.607337 | orchestrator | Wednesday 14 May 2025 02:43:49 +0000 (0:00:00.063) 0:03:52.274 ********* 2025-05-14 02:45:10.607345 | orchestrator | 2025-05-14 02:45:10.607353 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-14 02:45:10.607361 | orchestrator | Wednesday 14 May 2025 02:43:49 +0000 (0:00:00.050) 0:03:52.325 ********* 2025-05-14 02:45:10.607369 | orchestrator | 2025-05-14 02:45:10.607376 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-14 02:45:10.607384 | orchestrator | Wednesday 14 May 2025 02:43:49 +0000 (0:00:00.255) 0:03:52.581 ********* 2025-05-14 02:45:10.607391 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:10.607400 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:45:10.607407 | orchestrator | changed: [testbed-node-1] 
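The long run of "skipping"/"changed" items in the neutron container-check loop above comes from iterating over a dict of per-service container definitions (container_name, image, enabled, host_in_groups, volumes, healthcheck, ...). Judging from the log output, an item only produces a change on a host when the service is both enabled and mapped to that host; everything else is skipped. The following is a minimal, hypothetical Python sketch of that selection logic, not the actual kolla-ansible task; the dict contents are abbreviated examples modelled on the items printed above.

```python
# Sketch of the apparent skip/changed decision in the "Check neutron containers"
# loop: act on an item only if it is enabled AND the current host is in the
# service's group. Keys mirror the structure shown in the log items (abridged).
neutron_services = {
    "neutron-server": {
        "container_name": "neutron_server",
        "enabled": True,
        "host_in_groups": False,   # e.g. a compute node is not in neutron-server -> skipping
    },
    "neutron-ovn-metadata-agent": {
        "container_name": "neutron_ovn_metadata_agent",
        "enabled": True,
        "host_in_groups": True,    # enabled and mapped -> "changed" on nodes 3-5
    },
    "neutron-linuxbridge-agent": {
        "container_name": "neutron_linuxbridge_agent",
        "enabled": False,          # disabled in this OVN-based deployment -> skipping
        "host_in_groups": True,
    },
}

def containers_to_check(services: dict) -> list[str]:
    """Return the container names the check task would actually touch on this host."""
    return [
        svc["container_name"]
        for svc in services.values()
        if svc["enabled"] and svc["host_in_groups"]
    ]

print(containers_to_check(neutron_services))  # ['neutron_ovn_metadata_agent']
```

Under this assumption the only "changed" result per compute node is neutron_ovn_metadata_agent, which matches the handler run that follows ("Restart neutron-ovn-metadata-agent container" on testbed-node-3/4/5), while the controller-only restart of neutron_server runs on testbed-node-0/1/2.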
2025-05-14 02:45:10.607416 | orchestrator | 2025-05-14 02:45:10.607487 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-14 02:45:10.607495 | orchestrator | Wednesday 14 May 2025 02:44:18 +0000 (0:00:28.780) 0:04:21.361 ********* 2025-05-14 02:45:10.607502 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:45:10.607510 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:45:10.607517 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:45:10.607525 | orchestrator | 2025-05-14 02:45:10.607542 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:45:10.607551 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-14 02:45:10.607560 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-14 02:45:10.607576 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-14 02:45:10.607583 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-14 02:45:10.607590 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-14 02:45:10.607597 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-14 02:45:10.607604 | orchestrator | 2025-05-14 02:45:10.607611 | orchestrator | 2025-05-14 02:45:10.607618 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:45:10.607626 | orchestrator | Wednesday 14 May 2025 02:45:07 +0000 (0:00:49.443) 0:05:10.804 ********* 2025-05-14 02:45:10.607633 | orchestrator | =============================================================================== 2025-05-14 02:45:10.607640 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 49.45s 2025-05-14 02:45:10.607647 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.64s 2025-05-14 02:45:10.607654 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.78s 2025-05-14 02:45:10.607662 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 9.87s 2025-05-14 02:45:10.607669 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.49s 2025-05-14 02:45:10.607676 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.21s 2025-05-14 02:45:10.607683 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 6.96s 2025-05-14 02:45:10.607690 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 6.84s 2025-05-14 02:45:10.607696 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 6.61s 2025-05-14 02:45:10.607703 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.71s 2025-05-14 02:45:10.607711 | orchestrator | neutron : Check neutron containers -------------------------------------- 5.59s 2025-05-14 02:45:10.607726 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.38s 2025-05-14 02:45:10.607733 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.27s 2025-05-14 
02:45:10.607741 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.10s 2025-05-14 02:45:10.607748 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 4.85s 2025-05-14 02:45:10.607755 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.36s 2025-05-14 02:45:10.607761 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.29s 2025-05-14 02:45:10.607768 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.07s 2025-05-14 02:45:10.607774 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.02s 2025-05-14 02:45:10.607781 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.94s 2025-05-14 02:45:13.613181 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:13.613280 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:13.613859 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:13.614348 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:13.614792 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:13.614826 | orchestrator | 2025-05-14 02:45:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:16.641804 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:16.643928 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:16.644263 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:16.644901 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:16.645200 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:16.645223 | orchestrator | 2025-05-14 02:45:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:19.673234 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:19.673436 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:19.673925 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:19.674478 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:19.675045 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:19.675151 | orchestrator | 2025-05-14 02:45:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:22.714932 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:22.715036 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:22.715564 | orchestrator | 
2025-05-14 02:45:22 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:22.717347 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:22.724789 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:22.724847 | orchestrator | 2025-05-14 02:45:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:25.758902 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:25.759070 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:25.760154 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:25.760844 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:25.761696 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:25.761723 | orchestrator | 2025-05-14 02:45:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:28.796005 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:28.797347 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:28.801112 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:28.801583 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:28.802655 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:28.802685 | orchestrator | 2025-05-14 02:45:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:31.834683 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:31.834795 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:31.834993 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:31.835614 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:31.835913 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:31.835933 | orchestrator | 2025-05-14 02:45:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:34.858827 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:34.858903 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:34.859086 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:34.859536 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:34.859972 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 
02:45:34.859985 | orchestrator | 2025-05-14 02:45:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:37.900302 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:37.900656 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:37.901190 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:37.902439 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:37.903289 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:37.903319 | orchestrator | 2025-05-14 02:45:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:40.933284 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:40.933463 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:40.934192 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:40.934721 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:40.935346 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:40.935366 | orchestrator | 2025-05-14 02:45:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:43.973743 | orchestrator | 2025-05-14 02:45:43 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:43.973848 | orchestrator | 2025-05-14 02:45:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:43.974665 | orchestrator | 2025-05-14 02:45:43 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:43.974916 | orchestrator | 2025-05-14 02:45:43 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:43.976585 | orchestrator | 2025-05-14 02:45:43 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:43.976625 | orchestrator | 2025-05-14 02:45:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:47.000778 | orchestrator | 2025-05-14 02:45:46 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:47.002380 | orchestrator | 2025-05-14 02:45:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:47.004419 | orchestrator | 2025-05-14 02:45:47 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:47.004469 | orchestrator | 2025-05-14 02:45:47 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:47.006493 | orchestrator | 2025-05-14 02:45:47 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:47.006552 | orchestrator | 2025-05-14 02:45:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:50.060795 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:50.061448 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 
02:45:50.061482 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:50.061975 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:50.062427 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:50.062449 | orchestrator | 2025-05-14 02:45:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:53.103302 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:53.104077 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:53.106188 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:53.107772 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:53.109002 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:53.109027 | orchestrator | 2025-05-14 02:45:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:56.143321 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:56.143684 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:56.144256 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:56.145005 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:56.145425 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:56.145448 | orchestrator | 2025-05-14 02:45:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:59.181885 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:45:59.182093 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:45:59.182676 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:45:59.183092 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:45:59.183593 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:45:59.183625 | orchestrator | 2025-05-14 02:45:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:02.208779 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:46:02.210512 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:46:02.211694 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:46:02.212281 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:46:02.213244 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in 
state STARTED 2025-05-14 02:46:02.213281 | orchestrator | 2025-05-14 02:46:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:05.257096 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:46:05.257239 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:46:05.257685 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:46:05.258574 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:46:05.259875 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:46:05.259916 | orchestrator | 2025-05-14 02:46:05 | INFO  | Wait 1 second(s) until the next check [... identical checks of the same five tasks, all remaining in state STARTED, repeat every ~3 seconds until 02:46:44 ...] 2025-05-14 02:46:44.934186 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state STARTED 2025-05-14 02:46:44.938816 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:46:44.939493 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:46:44.941413 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:46:44.943151 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:46:44.943394 | orchestrator | 2025-05-14 02:46:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:47.993606 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task da89b621-4308-4f00-b93e-b72d0ea2b53c is in state SUCCESS 2025-05-14 02:46:47.995356 | orchestrator | 2025-05-14 02:46:47.995411 | orchestrator | 2025-05-14 02:46:47.995422 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:46:47.995431 | orchestrator | 2025-05-14 02:46:47.995438 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:46:47.995447 | orchestrator | Wednesday 14 May 2025 02:42:35 +0000 (0:00:00.991) 0:00:00.991 ********* 2025-05-14 02:46:47.995455 | orchestrator | ok: [testbed-manager] 2025-05-14 02:46:47.995463 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:46:47.995502 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:46:47.995511 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:46:47.995519 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:46:47.995526 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:46:47.995534 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:46:47.995542 | orchestrator | 2025-05-14 02:46:47.995570 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:46:47.995581 | orchestrator | Wednesday 14 May 2025 02:42:37 +0000 (0:00:02.470) 0:00:03.462 ********* 2025-05-14 02:46:47.995590 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-14 02:46:47.995598 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-14 02:46:47.995606 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-14 02:46:47.995615 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-14 02:46:47.995623 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-14 02:46:47.995631 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-14 02:46:47.995640 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-14 02:46:47.995675 | orchestrator | 2025-05-14 02:46:47.995685 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-14 02:46:47.995694 | orchestrator | 2025-05-14 02:46:47.995703 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-14 02:46:47.995713 | orchestrator | Wednesday 14 May 2025 02:42:38 +0000 (0:00:00.897) 0:00:04.359 ********* 2025-05-14 02:46:47.995722 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:46:47.995865 | orchestrator | 2025-05-14 02:46:47.995878 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-14 02:46:47.995887 | orchestrator | Wednesday 14 May 2025 02:42:40 
+0000 (0:00:01.923) 0:00:06.283 ********* 2025-05-14 02:46:47.995920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.995936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.995946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.996015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.996028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.996089 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:46:47.996123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.996133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.996150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.996159 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.996168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.996208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.996239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996262 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.996272 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996281 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.996307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.996317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996345 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.996355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.996379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.996403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.996417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.996427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.996450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.996460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.996475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996483 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.996494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.996518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.996543 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.996551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.996562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.996571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.996579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.996619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.996627 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.996639 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:46:47.996648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997485 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.997521 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.997538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.997544 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.997549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.997567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.997573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.997580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.997586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.997591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.997624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.997640 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.997652 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.997659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.997675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.997701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 
'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.997716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.997735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.997757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.997777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.997785 | orchestrator | 2025-05-14 02:46:47.997793 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-14 02:46:47.997802 | orchestrator | Wednesday 14 May 2025 02:42:44 +0000 (0:00:04.303) 0:00:10.586 ********* 2025-05-14 02:46:47.997810 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:46:47.997818 | orchestrator | 2025-05-14 02:46:47.997825 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-14 02:46:47.997833 | orchestrator | Wednesday 14 May 2025 02:42:47 +0000 (0:00:03.141) 0:00:13.727 ********* 2025-05-14 02:46:47.997844 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:46:47.997853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.997860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.997874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.997886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.997894 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.997904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.997913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.997924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.997933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.997947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.997956 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.997968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.997977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.997985 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.997993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.998008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.998088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.998107 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:46:47.998126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.998179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.998189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.998199 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.998376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.998404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.998412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.998427 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.998435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.998444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.998452 | orchestrator | 2025-05-14 02:46:47.998460 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-14 02:46:47.998469 | orchestrator | Wednesday 14 May 2025 02:42:55 +0000 (0:00:07.767) 0:00:21.494 ********* 2025-05-14 02:46:47.998483 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.998495 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998501 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998511 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.998518 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998556 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:46:47.998562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998592 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:47.998597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998642 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:47.998648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998664 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:47.998677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998702 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:47.998711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-14 02:46:47.998753 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:47.998774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998795 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:47.998800 | orchestrator | 2025-05-14 02:46:47.998805 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-14 02:46:47.998811 | orchestrator | Wednesday 14 May 2025 02:42:58 +0000 (0:00:02.614) 0:00:24.109 ********* 2025-05-14 02:46:47.998816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.998878 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998895 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 02:46:47.998909 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.998919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.998933 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998949 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:46:47.998962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.998981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.998989 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:47.998997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.999005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.999034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999049 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:47.999057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.999065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.999077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.999086 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:47.999094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.999102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:46:47.999124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.999134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.999148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.999156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.999164 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:47.999172 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:47.999180 | orchestrator | 2025-05-14 02:46:47.999188 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-14 02:46:47.999195 | orchestrator | Wednesday 14 May 2025 02:43:00 +0000 (0:00:02.569) 0:00:26.678 ********* 2025-05-14 02:46:47.999206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.999214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.999226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.999238 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:46:47.999245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.999256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.999264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.999271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:47.999286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.999294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.999302 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.999309 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.999379 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.999429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.999437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:47.999467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.999504 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.999522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.999545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 
'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.999590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.999623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.999644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.999659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.999700 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.999708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.999743 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:46:47.999752 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.999764 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.999787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.999835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.999849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.999886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.999911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999919 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.999927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:47.999939 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:47.999947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:47.999955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:47.999979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:47.999987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:47.999995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.000008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:48.000017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.000025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.000033 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.000048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:48.000057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.000065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.000077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.000085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:48.000094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.000102 | orchestrator | 2025-05-14 02:46:48.000110 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-14 02:46:48.000118 | orchestrator | Wednesday 14 May 2025 02:43:07 +0000 (0:00:07.191) 0:00:33.870 ********* 2025-05-14 02:46:48.000126 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:46:48.000134 | orchestrator | 2025-05-14 02:46:48.000142 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-14 02:46:48.000150 | orchestrator | Wednesday 14 May 2025 02:43:08 +0000 (0:00:01.017) 0:00:34.887 ********* 2025-05-14 02:46:48.000166 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090410, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3046043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000177 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090410, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3046043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000185 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090410, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3046043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000193 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090410, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3046043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000207 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090410, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3046043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000215 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090410, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3046043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000222 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090418, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000235 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090418, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000246 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090418, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000254 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090418, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000262 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090418, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000275 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090418, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000283 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090413, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000291 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090413, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000303 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090410, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3046043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.000314 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090413, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000321 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090413, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000350 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090416, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000363 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090416, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000372 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090413, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000379 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090413, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000395 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090416, 'dev': 173, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000406 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090416, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000415 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090416, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000423 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090456, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000568 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090456, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000583 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090416, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000600 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090456, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000608 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090456, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000621 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090421, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000629 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090456, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000637 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090418, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.000650 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090421, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000658 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090421, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000671 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090421, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000679 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090415, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000692 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090456, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000699 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090421, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000708 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090415, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000721 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090420, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000730 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090415, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000746 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090415, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000754 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090415, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000766 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090421, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000775 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090420, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000784 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090420, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000796 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090453, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000804 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090420, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000818 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090453, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000827 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090420, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000841 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 996, 'inode': 1090415, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000849 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090414, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000858 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090413, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.000870 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090453, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000884 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090453, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000892 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090414, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000899 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090420, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000910 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090453, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000918 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090425, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3076043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000926 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.000935 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090414, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000947 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090453, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000961 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090414, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000969 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090425, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3076043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000977 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.000986 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090414, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.000998 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090425, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3076043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.001006 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.001015 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090414, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.001023 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090425, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3076043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.001036 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.001049 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090425, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3076043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.001057 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.001064 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090425, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3076043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:46:48.001072 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.001079 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090416, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.001087 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090456, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.001099 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090421, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.001106 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090415, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.001114 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090420, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3066044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.001132 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090453, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3246047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.001140 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090414, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3056045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.001148 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090425, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.3076043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:46:48.001156 | orchestrator | 2025-05-14 02:46:48.001164 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-14 02:46:48.001172 | orchestrator | Wednesday 14 May 2025 02:43:42 +0000 (0:00:33.565) 0:01:08.453 ********* 2025-05-14 02:46:48.001179 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:46:48.001187 | orchestrator | 2025-05-14 02:46:48.001195 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-14 02:46:48.001200 | orchestrator | Wednesday 14 May 2025 02:43:42 +0000 (0:00:00.367) 0:01:08.821 ********* 2025-05-14 02:46:48.001206 | orchestrator | [WARNING]: Skipped 2025-05-14 02:46:48.001212 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001218 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-05-14 02:46:48.001223 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001229 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-14 02:46:48.001234 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:46:48.001243 | orchestrator | [WARNING]: Skipped 2025-05-14 02:46:48.001248 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001254 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-05-14 02:46:48.001259 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001265 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-05-14 02:46:48.001270 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:46:48.001276 | orchestrator | [WARNING]: Skipped 2025-05-14 02:46:48.001281 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001387 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-05-14 02:46:48.001394 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001398 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-05-14 02:46:48.001403 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 02:46:48.001408 | orchestrator | [WARNING]: Skipped 2025-05-14 02:46:48.001413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001417 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-05-14 02:46:48.001422 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001426 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-05-14 02:46:48.001431 | orchestrator | [WARNING]: Skipped 2025-05-14 02:46:48.001435 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001440 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-05-14 02:46:48.001445 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001449 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-05-14 02:46:48.001454 | orchestrator | [WARNING]: Skipped 2025-05-14 02:46:48.001459 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001463 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-05-14 02:46:48.001468 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001472 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-05-14 02:46:48.001477 | orchestrator | [WARNING]: Skipped 2025-05-14 02:46:48.001481 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001486 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-05-14 02:46:48.001495 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:46:48.001500 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-05-14 02:46:48.001504 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 02:46:48.001509 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 02:46:48.001513 | orchestrator | ok: [testbed-node-4 -> localhost] 
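The [WARNING] lines above are harmless: the "Find prometheus host config overrides" task only reports that no per-host prometheus.yml.d overlay directory exists under /opt/configuration/environments/kolla/files/overlays/prometheus/, so no host-specific overrides are found and each host returns an empty file list. As a rough sketch only (this directory and file are not part of the testbed configuration repository; the file name and scrape job are purely illustrative, and the merge behaviour is the usual kolla-ansible override handling, not something shown in this log), such an override could look like:

    # /opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d/99-extra-scrape.yml
    # Hypothetical drop-in fragment. Assumption: YAML files placed in this
    # per-host directory are picked up by the find task above and merged into
    # that host's generated prometheus.yml on the next deployment run.
    scrape_configs:
      - job_name: custom_node
        scrape_interval: 60s
        static_configs:
          - targets:
              - "192.168.16.10:9100"   # example target address, not taken from this log

Because no such directories exist here, the role simply renders the default template in the "Copying over prometheus config file" task below.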
2025-05-14 02:46:48.001518 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 02:46:48.001522 | orchestrator | 2025-05-14 02:46:48.001527 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-05-14 02:46:48.001532 | orchestrator | Wednesday 14 May 2025 02:43:44 +0000 (0:00:01.170) 0:01:09.991 ********* 2025-05-14 02:46:48.001536 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:46:48.001542 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.001546 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:46:48.001551 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.001556 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:46:48.001560 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.001564 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:46:48.001568 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.001572 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:46:48.001576 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.001581 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:46:48.001585 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.001589 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-14 02:46:48.001598 | orchestrator | 2025-05-14 02:46:48.001602 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-14 02:46:48.001607 | orchestrator | Wednesday 14 May 2025 02:44:03 +0000 (0:00:19.503) 0:01:29.494 ********* 2025-05-14 02:46:48.001611 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:46:48.001615 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.001619 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:46:48.001623 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.001628 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:46:48.001632 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.001636 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:46:48.001640 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.001647 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:46:48.001652 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.001659 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:46:48.001667 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.001673 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-14 02:46:48.001680 | orchestrator | 2025-05-14 02:46:48.001687 | orchestrator | TASK [prometheus : Copying over 
prometheus alertmanager config file] *********** 2025-05-14 02:46:48.001694 | orchestrator | Wednesday 14 May 2025 02:44:07 +0000 (0:00:04.225) 0:01:33.719 ********* 2025-05-14 02:46:48.001700 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:46:48.001707 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.001716 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:46:48.001721 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.001725 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:46:48.001729 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.001733 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:46:48.001737 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.001741 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:46:48.001745 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.001750 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:46:48.001754 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.001762 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-14 02:46:48.001769 | orchestrator | 2025-05-14 02:46:48.001776 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-14 02:46:48.001782 | orchestrator | Wednesday 14 May 2025 02:44:11 +0000 (0:00:03.449) 0:01:37.169 ********* 2025-05-14 02:46:48.001789 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:46:48.001796 | orchestrator | 2025-05-14 02:46:48.001807 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-14 02:46:48.001814 | orchestrator | Wednesday 14 May 2025 02:44:11 +0000 (0:00:00.456) 0:01:37.625 ********* 2025-05-14 02:46:48.001821 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:46:48.001832 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.001839 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.001846 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.001852 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.001859 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.001866 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.001873 | orchestrator | 2025-05-14 02:46:48.001880 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-14 02:46:48.001887 | orchestrator | Wednesday 14 May 2025 02:44:12 +0000 (0:00:00.807) 0:01:38.433 ********* 2025-05-14 02:46:48.001893 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:46:48.001900 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.001907 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.001914 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:46:48.001921 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:46:48.001927 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:46:48.001934 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:46:48.001941 | orchestrator | 2025-05-14 02:46:48.001948 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-14 02:46:48.001955 | orchestrator | Wednesday 14 May 2025 02:44:16 +0000 (0:00:03.830) 0:01:42.264 ********* 2025-05-14 02:46:48.001961 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:46:48.001969 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.001976 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:46:48.001983 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.001991 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:46:48.001998 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.002005 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:46:48.002037 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.002048 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:46:48.002055 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.002062 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:46:48.002069 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.002076 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:46:48.002083 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:46:48.002090 | orchestrator | 2025-05-14 02:46:48.002097 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-14 02:46:48.002104 | orchestrator | Wednesday 14 May 2025 02:44:19 +0000 (0:00:03.347) 0:01:45.612 ********* 2025-05-14 02:46:48.002116 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:46:48.002123 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.002130 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:46:48.002137 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.002144 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:46:48.002151 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.002158 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:46:48.002165 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.002172 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:46:48.002179 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.002193 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:46:48.002200 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:46:48.002207 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-14 02:46:48.002215 | orchestrator | 2025-05-14 02:46:48.002222 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-14 02:46:48.002230 | orchestrator | Wednesday 14 May 2025 02:44:23 +0000 (0:00:04.267) 0:01:49.879 ********* 2025-05-14 02:46:48.002237 | orchestrator | [WARNING]: Skipped 2025-05-14 02:46:48.002244 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-14 02:46:48.002251 | orchestrator | due to this access issue: 2025-05-14 02:46:48.002259 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-14 02:46:48.002266 | orchestrator | not a directory 2025-05-14 02:46:48.002273 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:46:48.002280 | orchestrator | 2025-05-14 02:46:48.002286 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-14 02:46:48.002293 | orchestrator | Wednesday 14 May 2025 02:44:25 +0000 (0:00:01.756) 0:01:51.635 ********* 2025-05-14 02:46:48.002299 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:46:48.002305 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.002312 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.002318 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.002340 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.002347 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.002362 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.002369 | orchestrator | 2025-05-14 02:46:48.002375 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-14 02:46:48.002382 | orchestrator | Wednesday 14 May 2025 02:44:27 +0000 (0:00:01.588) 0:01:53.224 ********* 2025-05-14 02:46:48.002388 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:46:48.002395 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.002401 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.002408 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.002414 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.002421 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.002428 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.002435 | orchestrator | 2025-05-14 02:46:48.002441 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-14 02:46:48.002448 | orchestrator | Wednesday 14 May 2025 02:44:28 +0000 (0:00:00.939) 0:01:54.164 ********* 2025-05-14 02:46:48.002455 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:46:48.002462 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.002469 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:46:48.002476 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.002483 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:46:48.002490 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.002497 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:46:48.002505 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.002512 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:46:48.002521 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.002528 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:46:48.002535 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:46:48.002543 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:46:48.002561 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.002569 | orchestrator | 2025-05-14 02:46:48.002576 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-14 02:46:48.002584 | orchestrator | Wednesday 14 May 2025 02:44:32 +0000 (0:00:04.291) 0:01:58.455 ********* 2025-05-14 02:46:48.002590 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:46:48.002597 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:46:48.002604 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:46:48.002612 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:46:48.002624 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:46:48.002631 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:46:48.002638 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:46:48.002645 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:46:48.002652 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:46:48.002659 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:46:48.002665 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:46:48.002671 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:46:48.002678 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:46:48.002685 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:46:48.002692 | orchestrator | 2025-05-14 02:46:48.002699 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-14 02:46:48.002706 | orchestrator | Wednesday 14 May 2025 02:44:35 +0000 (0:00:03.185) 0:02:01.641 ********* 2025-05-14 02:46:48.002715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:48.002730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:48.002738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:48.002752 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:46:48.002763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:48.002770 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:48.002781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:48.002789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:48.002802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:46:48.002809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:48.002819 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:48.002826 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.002834 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.002841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:48.002852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.002864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.002871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:48.002879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.002889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.002896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.002904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.002912 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.002924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.002936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:46:48.002944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.002952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.002961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.002970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:48.002983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:48.002996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.003020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:48.003029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:48.003097 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:46:48.003111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003130 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}}}})  2025-05-14 02:46:48.003137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.003155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.003169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.003176 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.003189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:48.003196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:48.003203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.003234 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.003242 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:48.003247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003254 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.003263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.003279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:48.003286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:48.003291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.003295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:48.003305 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:48.003310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.003314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:46:48.003362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:46:48.003370 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:46:48.003380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.003390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:48.003398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.003410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:48.003425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:46:48.003434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:46:48.003445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:46:48.003450 | orchestrator | 2025-05-14 02:46:48.003454 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-14 02:46:48.003458 | orchestrator | Wednesday 14 May 2025 02:44:41 +0000 (0:00:05.733) 0:02:07.374 ********* 2025-05-14 02:46:48.003462 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-14 02:46:48.003467 | orchestrator | 2025-05-14 02:46:48.003471 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:46:48.003478 | orchestrator | Wednesday 14 May 2025 02:44:44 +0000 (0:00:03.007) 0:02:10.382 ********* 2025-05-14 02:46:48.003483 | orchestrator | 2025-05-14 02:46:48.003487 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:46:48.003491 | orchestrator | Wednesday 14 May 2025 02:44:44 +0000 (0:00:00.126) 0:02:10.509 ********* 2025-05-14 02:46:48.003495 | orchestrator | 2025-05-14 02:46:48.003500 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:46:48.003504 | orchestrator | Wednesday 14 May 2025 02:44:44 +0000 (0:00:00.416) 0:02:10.925 ********* 2025-05-14 02:46:48.003508 | orchestrator | 2025-05-14 02:46:48.003512 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:46:48.003517 | orchestrator | Wednesday 14 May 2025 02:44:45 +0000 (0:00:00.068) 0:02:10.994 ********* 2025-05-14 02:46:48.003521 | orchestrator | 2025-05-14 02:46:48.003525 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:46:48.003529 | orchestrator | Wednesday 14 May 2025 02:44:45 +0000 (0:00:00.062) 0:02:11.056 ********* 2025-05-14 02:46:48.003533 | orchestrator | 2025-05-14 02:46:48.003537 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:46:48.003541 | orchestrator | Wednesday 14 May 2025 02:44:45 +0000 (0:00:00.069) 0:02:11.126 ********* 2025-05-14 02:46:48.003546 | orchestrator | 2025-05-14 02:46:48.003550 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:46:48.003554 | orchestrator | Wednesday 14 May 2025 02:44:45 +0000 (0:00:00.323) 0:02:11.450 ********* 2025-05-14 02:46:48.003558 | orchestrator | 2025-05-14 02:46:48.003562 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-14 02:46:48.003566 | orchestrator | Wednesday 14 May 2025 02:44:45 +0000 (0:00:00.091) 0:02:11.541 ********* 2025-05-14 02:46:48.003570 | orchestrator | changed: [testbed-manager] 2025-05-14 02:46:48.003574 | orchestrator | 2025-05-14 02:46:48.003578 | 
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-14 02:46:48.003582 | orchestrator | Wednesday 14 May 2025 02:45:03 +0000 (0:00:17.631) 0:02:29.173 ********* 2025-05-14 02:46:48.003587 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:46:48.003591 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:46:48.003597 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:46:48.003601 | orchestrator | changed: [testbed-manager] 2025-05-14 02:46:48.003605 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:46:48.003609 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:46:48.003613 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:46:48.003617 | orchestrator | 2025-05-14 02:46:48.003622 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-14 02:46:48.003626 | orchestrator | Wednesday 14 May 2025 02:45:26 +0000 (0:00:23.332) 0:02:52.505 ********* 2025-05-14 02:46:48.003630 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:46:48.003634 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:46:48.003638 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:46:48.003642 | orchestrator | 2025-05-14 02:46:48.003646 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-14 02:46:48.003651 | orchestrator | Wednesday 14 May 2025 02:45:38 +0000 (0:00:12.406) 0:03:04.912 ********* 2025-05-14 02:46:48.003658 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:46:48.003665 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:46:48.003672 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:46:48.003678 | orchestrator | 2025-05-14 02:46:48.003684 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-14 02:46:48.003691 | orchestrator | Wednesday 14 May 2025 02:45:53 +0000 (0:00:14.231) 0:03:19.144 ********* 2025-05-14 02:46:48.003698 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:46:48.003704 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:46:48.003708 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:46:48.003712 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:46:48.003716 | orchestrator | changed: [testbed-manager] 2025-05-14 02:46:48.003723 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:46:48.003727 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:46:48.003732 | orchestrator | 2025-05-14 02:46:48.003736 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-14 02:46:48.003740 | orchestrator | Wednesday 14 May 2025 02:46:10 +0000 (0:00:17.141) 0:03:36.285 ********* 2025-05-14 02:46:48.003744 | orchestrator | changed: [testbed-manager] 2025-05-14 02:46:48.003748 | orchestrator | 2025-05-14 02:46:48.003752 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-14 02:46:48.003756 | orchestrator | Wednesday 14 May 2025 02:46:19 +0000 (0:00:09.094) 0:03:45.379 ********* 2025-05-14 02:46:48.003760 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:46:48.003765 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:46:48.003769 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:46:48.003773 | orchestrator | 2025-05-14 02:46:48.003777 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-14 02:46:48.003781 | orchestrator | 
Wednesday 14 May 2025 02:46:26 +0000 (0:00:06.568) 0:03:51.948 ********* 2025-05-14 02:46:48.003785 | orchestrator | changed: [testbed-manager] 2025-05-14 02:46:48.003789 | orchestrator | 2025-05-14 02:46:48.003793 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-14 02:46:48.003798 | orchestrator | Wednesday 14 May 2025 02:46:33 +0000 (0:00:07.354) 0:03:59.303 ********* 2025-05-14 02:46:48.003802 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:46:48.003806 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:46:48.003810 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:46:48.003814 | orchestrator | 2025-05-14 02:46:48.003820 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:46:48.003825 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-14 02:46:48.003830 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-14 02:46:48.003834 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-14 02:46:48.003839 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-14 02:46:48.003843 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-14 02:46:48.003847 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-14 02:46:48.003851 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-14 02:46:48.003855 | orchestrator | 2025-05-14 02:46:48.003859 | orchestrator | 2025-05-14 02:46:48.003863 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:46:48.003868 | orchestrator | Wednesday 14 May 2025 02:46:44 +0000 (0:00:11.218) 0:04:10.521 ********* 2025-05-14 02:46:48.003872 | orchestrator | =============================================================================== 2025-05-14 02:46:48.003876 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 33.57s 2025-05-14 02:46:48.003880 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 23.33s 2025-05-14 02:46:48.003884 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.50s 2025-05-14 02:46:48.003888 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.63s 2025-05-14 02:46:48.003893 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.14s 2025-05-14 02:46:48.003899 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 14.23s 2025-05-14 02:46:48.003906 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.41s 2025-05-14 02:46:48.003910 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.22s 2025-05-14 02:46:48.003914 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.09s 2025-05-14 02:46:48.003918 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.77s 2025-05-14 02:46:48.003923 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container 
------------- 7.35s 2025-05-14 02:46:48.003927 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.19s 2025-05-14 02:46:48.003931 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.57s 2025-05-14 02:46:48.003935 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.73s 2025-05-14 02:46:48.003939 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.30s 2025-05-14 02:46:48.003943 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 4.29s 2025-05-14 02:46:48.003947 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.27s 2025-05-14 02:46:48.003951 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.23s 2025-05-14 02:46:48.003955 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.83s 2025-05-14 02:46:48.003960 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.45s 2025-05-14 02:46:48.003964 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:46:48.003968 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:46:48.003972 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:46:48.003977 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:46:48.003981 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:46:48.003985 | orchestrator | 2025-05-14 02:46:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:51.045716 | orchestrator | 2025-05-14 02:46:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:46:51.047308 | orchestrator | 2025-05-14 02:46:51 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:46:51.048676 | orchestrator | 2025-05-14 02:46:51 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:46:51.050906 | orchestrator | 2025-05-14 02:46:51 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:46:51.052979 | orchestrator | 2025-05-14 02:46:51 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:46:51.053022 | orchestrator | 2025-05-14 02:46:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:54.094308 | orchestrator | 2025-05-14 02:46:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:46:54.094508 | orchestrator | 2025-05-14 02:46:54 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:46:54.094519 | orchestrator | 2025-05-14 02:46:54 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:46:54.094886 | orchestrator | 2025-05-14 02:46:54 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:46:54.095420 | orchestrator | 2025-05-14 02:46:54 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:46:54.095509 | orchestrator | 2025-05-14 02:46:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:57.128694 | orchestrator 
| 2025-05-14 02:46:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:46:57.128913 | orchestrator | 2025-05-14 02:46:57 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:46:57.129499 | orchestrator | 2025-05-14 02:46:57 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:46:57.131552 | orchestrator | 2025-05-14 02:46:57 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:46:57.132253 | orchestrator | 2025-05-14 02:46:57 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:46:57.132291 | orchestrator | 2025-05-14 02:46:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:00.187663 | orchestrator | 2025-05-14 02:47:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:00.189567 | orchestrator | 2025-05-14 02:47:00 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:00.190642 | orchestrator | 2025-05-14 02:47:00 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:47:00.191440 | orchestrator | 2025-05-14 02:47:00 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:00.193118 | orchestrator | 2025-05-14 02:47:00 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:00.193141 | orchestrator | 2025-05-14 02:47:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:03.248154 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:03.248264 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:03.250496 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:47:03.250566 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:03.250575 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:03.250583 | orchestrator | 2025-05-14 02:47:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:06.301170 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:06.301272 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:06.301286 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:47:06.301298 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:06.301474 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:06.301499 | orchestrator | 2025-05-14 02:47:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:09.368700 | orchestrator | 2025-05-14 02:47:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:09.368811 | orchestrator | 2025-05-14 02:47:09 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:09.369017 | orchestrator | 2025-05-14 02:47:09 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 
02:47:09.369583 | orchestrator | 2025-05-14 02:47:09 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:09.373161 | orchestrator | 2025-05-14 02:47:09 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:09.373240 | orchestrator | 2025-05-14 02:47:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:12.404725 | orchestrator | 2025-05-14 02:47:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:12.404863 | orchestrator | 2025-05-14 02:47:12 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:12.405530 | orchestrator | 2025-05-14 02:47:12 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:47:12.405970 | orchestrator | 2025-05-14 02:47:12 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:12.406704 | orchestrator | 2025-05-14 02:47:12 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:12.406737 | orchestrator | 2025-05-14 02:47:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:15.459756 | orchestrator | 2025-05-14 02:47:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:15.461889 | orchestrator | 2025-05-14 02:47:15 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:15.462825 | orchestrator | 2025-05-14 02:47:15 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:47:15.466227 | orchestrator | 2025-05-14 02:47:15 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:15.467491 | orchestrator | 2025-05-14 02:47:15 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:15.467536 | orchestrator | 2025-05-14 02:47:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:18.515183 | orchestrator | 2025-05-14 02:47:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:18.515990 | orchestrator | 2025-05-14 02:47:18 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:18.517692 | orchestrator | 2025-05-14 02:47:18 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:47:18.518884 | orchestrator | 2025-05-14 02:47:18 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:18.520113 | orchestrator | 2025-05-14 02:47:18 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:18.520147 | orchestrator | 2025-05-14 02:47:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:21.568802 | orchestrator | 2025-05-14 02:47:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:21.570723 | orchestrator | 2025-05-14 02:47:21 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:21.572547 | orchestrator | 2025-05-14 02:47:21 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state STARTED 2025-05-14 02:47:21.574284 | orchestrator | 2025-05-14 02:47:21 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:21.575928 | orchestrator | 2025-05-14 02:47:21 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:21.576139 | orchestrator | 2025-05-14 02:47:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 
02:47:24.634277 | orchestrator | 2025-05-14 02:47:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:24.636050 | orchestrator | 2025-05-14 02:47:24 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:24.637487 | orchestrator | 2025-05-14 02:47:24 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:24.640344 | orchestrator | 2025-05-14 02:47:24 | INFO  | Task a53932aa-3849-4f73-8ba6-91de668150b6 is in state SUCCESS 2025-05-14 02:47:24.642264 | orchestrator | 2025-05-14 02:47:24.642348 | orchestrator | 2025-05-14 02:47:24.642368 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:47:24.642387 | orchestrator | 2025-05-14 02:47:24.642403 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:47:24.642419 | orchestrator | Wednesday 14 May 2025 02:44:10 +0000 (0:00:00.269) 0:00:00.269 ********* 2025-05-14 02:47:24.642436 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:47:24.642455 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:47:24.642471 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:47:24.642488 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:47:24.642506 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:47:24.642524 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:47:24.642541 | orchestrator | 2025-05-14 02:47:24.642578 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:47:24.642597 | orchestrator | Wednesday 14 May 2025 02:44:10 +0000 (0:00:00.544) 0:00:00.814 ********* 2025-05-14 02:47:24.642617 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-14 02:47:24.642636 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-14 02:47:24.642654 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-14 02:47:24.642774 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-14 02:47:24.642795 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-14 02:47:24.642807 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-14 02:47:24.642817 | orchestrator | 2025-05-14 02:47:24.642829 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-14 02:47:24.642840 | orchestrator | 2025-05-14 02:47:24.642851 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 02:47:24.642864 | orchestrator | Wednesday 14 May 2025 02:44:11 +0000 (0:00:00.675) 0:00:01.490 ********* 2025-05-14 02:47:24.642878 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:47:24.642892 | orchestrator | 2025-05-14 02:47:24.642904 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-14 02:47:24.642917 | orchestrator | Wednesday 14 May 2025 02:44:12 +0000 (0:00:01.358) 0:00:02.848 ********* 2025-05-14 02:47:24.642929 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-14 02:47:24.642942 | orchestrator | 2025-05-14 02:47:24.642954 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-14 02:47:24.642967 | orchestrator | Wednesday 14 May 2025 02:44:16 +0000 
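Once the first kolla-ansible task reaches SUCCESS, the cinder play begins: hosts are grouped by Kolla action and by enabled services (the enable_cinder_True group), and the service-ks-register tasks register the cinderv3 service, its endpoints, and the service project, user, and role assignments in Keystone. The log shows this done through the kolla-ansible role; the snippet below is only a rough openstacksdk equivalent of the service and endpoint registration, with the endpoint URLs taken from the log and the cloud name and region assumed for illustration:

```python
import openstack

# Rough openstacksdk equivalent of what service-ks-register does for cinder.
conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

service = conn.identity.create_service(name="cinderv3", type="volumev3")

for interface, url in {
    "internal": "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s",
    "public": "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s",
}.items():
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",  # assumption: the region is not shown in the log
    )
```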
(0:00:03.625) 0:00:06.474 ********* 2025-05-14 02:47:24.642979 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-14 02:47:24.642992 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-14 02:47:24.643004 | orchestrator | 2025-05-14 02:47:24.643016 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-14 02:47:24.643028 | orchestrator | Wednesday 14 May 2025 02:44:23 +0000 (0:00:06.855) 0:00:13.330 ********* 2025-05-14 02:47:24.643041 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:47:24.643054 | orchestrator | 2025-05-14 02:47:24.643067 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-14 02:47:24.643079 | orchestrator | Wednesday 14 May 2025 02:44:27 +0000 (0:00:03.673) 0:00:17.003 ********* 2025-05-14 02:47:24.643091 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:47:24.643491 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-14 02:47:24.643535 | orchestrator | 2025-05-14 02:47:24.643548 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-14 02:47:24.643559 | orchestrator | Wednesday 14 May 2025 02:44:31 +0000 (0:00:04.123) 0:00:21.126 ********* 2025-05-14 02:47:24.643570 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:47:24.643581 | orchestrator | 2025-05-14 02:47:24.643592 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-14 02:47:24.643603 | orchestrator | Wednesday 14 May 2025 02:44:34 +0000 (0:00:03.586) 0:00:24.713 ********* 2025-05-14 02:47:24.643614 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-14 02:47:24.643625 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-14 02:47:24.643636 | orchestrator | 2025-05-14 02:47:24.643646 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-14 02:47:24.643657 | orchestrator | Wednesday 14 May 2025 02:44:43 +0000 (0:00:09.032) 0:00:33.745 ********* 2025-05-14 02:47:24.643692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.643717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.643731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.643744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.643769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.643781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.643806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.643819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.643830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.643849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.643862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.643882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.643898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.643910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.643929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.643941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.643959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.643975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.643987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.643999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.644018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.644030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.644053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.644065 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.644077 | orchestrator | 2025-05-14 02:47:24.644088 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 02:47:24.644105 | orchestrator | Wednesday 14 May 2025 02:44:46 +0000 (0:00:02.256) 0:00:36.002 ********* 2025-05-14 02:47:24.644117 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.644128 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:24.644139 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:24.644150 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:47:24.644161 | orchestrator | 2025-05-14 02:47:24.644172 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-14 02:47:24.644183 | orchestrator | Wednesday 14 May 2025 02:44:47 +0000 (0:00:01.212) 0:00:37.214 ********* 2025-05-14 02:47:24.644194 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-14 02:47:24.644205 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-14 02:47:24.644216 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-14 02:47:24.644227 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-14 02:47:24.644238 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-14 02:47:24.644249 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-14 02:47:24.644260 | orchestrator | 2025-05-14 02:47:24.644270 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-14 02:47:24.644282 | orchestrator | Wednesday 14 May 2025 02:44:50 +0000 (0:00:03.272) 0:00:40.487 ********* 2025-05-14 02:47:24.644324 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:47:24.644346 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:47:24.644373 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:47:24.644399 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:47:24.644411 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:47:24.644422 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:47:24.644434 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:47:24.644458 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:47:24.644481 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:47:24.644494 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:47:24.644506 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:47:24.645095 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:47:24.645118 | orchestrator | 2025-05-14 02:47:24.645129 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-14 02:47:24.645141 | orchestrator | Wednesday 14 May 2025 02:44:54 +0000 (0:00:03.593) 0:00:44.080 ********* 2025-05-14 02:47:24.645152 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:47:24.645164 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:47:24.645191 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:47:24.645203 | orchestrator | 2025-05-14 02:47:24.645214 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-14 02:47:24.645225 | orchestrator | Wednesday 14 May 2025 02:44:56 +0000 (0:00:02.425) 0:00:46.505 ********* 2025-05-14 02:47:24.645236 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-14 02:47:24.645247 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-14 02:47:24.645258 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-14 02:47:24.645269 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 02:47:24.645280 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 02:47:24.645291 | orchestrator | changed: [testbed-node-5] => 
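The external-Ceph tasks above create per-service ceph config subdirectories on the storage nodes and copy a ceph.conf plus the client keyrings (ceph.client.cinder.keyring, ceph.client.cinder-backup.keyring) into the cinder-volume and cinder-backup config directories. A small illustrative check of that layout follows; the exact paths under /etc/kolla follow the usual kolla-ansible node config convention and are an assumption here, so verify them against the actual nodes:

```python
from pathlib import Path

# Files the external-Ceph tasks are expected to have laid down on a storage
# node (testbed-node-3/4/5 above). The directory layout is assumed, not
# copied from the log, so adjust it to match the real /etc/kolla contents.
EXPECTED = {
    "cinder-volume": ("ceph.conf", "ceph.client.cinder.keyring"),
    "cinder-backup": ("ceph.conf",
                      "ceph.client.cinder.keyring",
                      "ceph.client.cinder-backup.keyring"),
}

def missing_ceph_files(base: str = "/etc/kolla") -> list[str]:
    missing = []
    for service, names in EXPECTED.items():
        for name in names:
            path = Path(base) / service / name
            if not path.is_file():
                missing.append(str(path))
    return missing

if __name__ == "__main__":
    gaps = missing_ceph_files()
    print("missing:", gaps if gaps else "none")
```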
(item=ceph.client.cinder-backup.keyring) 2025-05-14 02:47:24.645338 | orchestrator | 2025-05-14 02:47:24.645356 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-14 02:47:24.645393 | orchestrator | Wednesday 14 May 2025 02:45:00 +0000 (0:00:03.629) 0:00:50.134 ********* 2025-05-14 02:47:24.645405 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-14 02:47:24.645416 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-14 02:47:24.645439 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-14 02:47:24.645450 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-14 02:47:24.645461 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-14 02:47:24.645472 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-14 02:47:24.645483 | orchestrator | 2025-05-14 02:47:24.645494 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-14 02:47:24.645505 | orchestrator | Wednesday 14 May 2025 02:45:01 +0000 (0:00:01.608) 0:00:51.743 ********* 2025-05-14 02:47:24.645516 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.645527 | orchestrator | 2025-05-14 02:47:24.645538 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-14 02:47:24.645549 | orchestrator | Wednesday 14 May 2025 02:45:01 +0000 (0:00:00.144) 0:00:51.887 ********* 2025-05-14 02:47:24.645560 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.645571 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:24.645582 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:24.645593 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:47:24.645604 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:47:24.645615 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:47:24.645625 | orchestrator | 2025-05-14 02:47:24.645637 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 02:47:24.645648 | orchestrator | Wednesday 14 May 2025 02:45:02 +0000 (0:00:01.057) 0:00:52.945 ********* 2025-05-14 02:47:24.645660 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:47:24.645740 | orchestrator | 2025-05-14 02:47:24.645754 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-14 02:47:24.645767 | orchestrator | Wednesday 14 May 2025 02:45:07 +0000 (0:00:04.209) 0:00:57.154 ********* 2025-05-14 02:47:24.645781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.645813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.645831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.645843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.645855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.645867 | orchestrator | changed: [testbed-node-4] => 
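Every container definition echoed in these tasks carries a healthcheck: cinder-api uses healthcheck_curl against its bound API address on port 8776, while cinder-scheduler, cinder-volume, and cinder-backup use healthcheck_port to confirm the service holds a connection on port 5672 (RabbitMQ). Rough Python stand-ins for those two checks, for illustration only (the real checks are kolla's healthcheck scripts inside the containers):

```python
import socket
import urllib.request

def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
    """Very rough stand-in for kolla's healthcheck_curl: is the API answering?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except OSError:
        return False

def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Stand-in for healthcheck_port. The real check looks for an existing
    connection owned by the named process (e.g. cinder-scheduler -> 5672);
    here we only test that the port is reachable at all."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example, using addresses that appear in the log:
print(healthcheck_curl("http://192.168.16.10:8776"))
print(healthcheck_port("192.168.16.10", 5672))
```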
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.645891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.645907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.645919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.645931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.645943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.645960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.645971 | orchestrator | 2025-05-14 02:47:24.645982 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-14 02:47:24.645993 | orchestrator | Wednesday 14 May 2025 02:45:12 +0000 (0:00:05.675) 0:01:02.829 ********* 2025-05-14 02:47:24.646015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.646079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2025-05-14 02:47:24.646091 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.646104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.646115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646134 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:24.646145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.646166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646178 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:24.646194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646217 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:47:24.646229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646259 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:47:24.646278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646348 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:47:24.646359 | orchestrator | 2025-05-14 02:47:24.646371 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-14 02:47:24.646382 | orchestrator | Wednesday 14 May 2025 02:45:16 +0000 (0:00:03.737) 0:01:06.567 ********* 2025-05-14 02:47:24.646393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.646416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.646446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646458 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.646469 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:24.646486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.646498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646509 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:24.646521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646550 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:47:24.646567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646596 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:47:24.646607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646638 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:47:24.646649 | orchestrator | 2025-05-14 02:47:24.646660 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-14 02:47:24.646671 | orchestrator | Wednesday 14 May 2025 02:45:19 +0000 (0:00:03.342) 0:01:09.910 ********* 2025-05-14 02:47:24.646682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.646700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.646729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.646760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.646795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.646807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.646827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.646839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.646857 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.646874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.646886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.646932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.646977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.646996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647019 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647037 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647065 | orchestrator | 2025-05-14 02:47:24.647076 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-14 02:47:24.647095 | orchestrator | Wednesday 14 May 2025 02:45:23 +0000 (0:00:03.627) 0:01:13.538 ********* 2025-05-14 02:47:24.647106 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-14 02:47:24.647117 | 
orchestrator | skipping: [testbed-node-3] 2025-05-14 02:47:24.647128 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-14 02:47:24.647139 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:47:24.647151 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-14 02:47:24.647162 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:47:24.647173 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-14 02:47:24.647184 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-14 02:47:24.647195 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-14 02:47:24.647206 | orchestrator | 2025-05-14 02:47:24.647217 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-14 02:47:24.647228 | orchestrator | Wednesday 14 May 2025 02:45:26 +0000 (0:00:02.931) 0:01:16.469 ********* 2025-05-14 02:47:24.647239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.647251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.647268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.647397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647421 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.647478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.647490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.647507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.647671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647701 | orchestrator | 2025-05-14 02:47:24.647719 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-14 02:47:24.647731 | orchestrator | Wednesday 14 May 2025 02:45:38 +0000 (0:00:12.260) 0:01:28.729 ********* 2025-05-14 02:47:24.647742 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:24.647753 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.647763 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:24.647773 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:47:24.647783 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:47:24.647793 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:47:24.647802 | orchestrator | 2025-05-14 
02:47:24.647812 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-14 02:47:24.647826 | orchestrator | Wednesday 14 May 2025 02:45:43 +0000 (0:00:04.330) 0:01:33.060 ********* 2025-05-14 02:47:24.647837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.647848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.647915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.647962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.647995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648006 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.648016 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:24.648025 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:24.648036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.648046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648097 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:47:24.648117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.648128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648169 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:47:24.648201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.648218 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648271 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:47:24.648286 | orchestrator | 2025-05-14 02:47:24.648328 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-14 02:47:24.648345 | orchestrator | Wednesday 14 May 2025 02:45:45 +0000 (0:00:02.188) 0:01:35.249 ********* 2025-05-14 02:47:24.648360 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.648374 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:24.648390 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:24.648404 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:47:24.648420 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:47:24.648435 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:47:24.648450 | orchestrator | 2025-05-14 02:47:24.648465 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-14 02:47:24.648482 | orchestrator | Wednesday 14 May 2025 02:45:46 +0000 (0:00:00.815) 0:01:36.064 ********* 2025-05-14 02:47:24.648509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.648535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.648572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:47:24.648611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.648648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.648664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:47:24.648689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.648716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.648736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.648747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.648757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.648783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.648865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.648882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:47:24.648932 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.648949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:47:24.648967 | orchestrator | 2025-05-14 02:47:24.648977 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 02:47:24.648994 | orchestrator | Wednesday 14 May 2025 02:45:49 +0000 (0:00:03.088) 0:01:39.153 ********* 2025-05-14 02:47:24.649004 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.649014 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:24.649023 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:24.649033 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:47:24.649042 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:47:24.649051 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:47:24.649061 | orchestrator | 2025-05-14 02:47:24.649070 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-14 02:47:24.649080 | orchestrator | Wednesday 14 May 2025 02:45:49 +0000 (0:00:00.650) 0:01:39.803 ********* 2025-05-14 02:47:24.649090 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:24.649099 | orchestrator | 2025-05-14 02:47:24.649109 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-14 02:47:24.649118 | orchestrator | Wednesday 14 May 2025 02:45:52 +0000 (0:00:02.701) 0:01:42.505 ********* 2025-05-14 02:47:24.649128 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:24.649137 | orchestrator | 2025-05-14 02:47:24.649147 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-14 02:47:24.649156 | orchestrator | Wednesday 14 May 2025 02:45:55 +0000 (0:00:02.788) 0:01:45.294 ********* 2025-05-14 02:47:24.649166 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:24.649176 | orchestrator | 2025-05-14 02:47:24.649185 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:47:24.649195 | orchestrator | Wednesday 14 May 2025 02:46:16 +0000 (0:00:21.473) 0:02:06.768 ********* 2025-05-14 02:47:24.649204 | orchestrator | 2025-05-14 02:47:24.649214 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 
2025-05-14 02:47:24.649223 | orchestrator | Wednesday 14 May 2025 02:46:16 +0000 (0:00:00.054) 0:02:06.822 ********* 2025-05-14 02:47:24.649233 | orchestrator | 2025-05-14 02:47:24.649242 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:47:24.649252 | orchestrator | Wednesday 14 May 2025 02:46:17 +0000 (0:00:00.163) 0:02:06.985 ********* 2025-05-14 02:47:24.649261 | orchestrator | 2025-05-14 02:47:24.649271 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:47:24.649281 | orchestrator | Wednesday 14 May 2025 02:46:17 +0000 (0:00:00.057) 0:02:07.043 ********* 2025-05-14 02:47:24.649290 | orchestrator | 2025-05-14 02:47:24.649483 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:47:24.649497 | orchestrator | Wednesday 14 May 2025 02:46:17 +0000 (0:00:00.054) 0:02:07.097 ********* 2025-05-14 02:47:24.649505 | orchestrator | 2025-05-14 02:47:24.649513 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:47:24.649521 | orchestrator | Wednesday 14 May 2025 02:46:17 +0000 (0:00:00.053) 0:02:07.151 ********* 2025-05-14 02:47:24.649529 | orchestrator | 2025-05-14 02:47:24.649537 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-14 02:47:24.649619 | orchestrator | Wednesday 14 May 2025 02:46:17 +0000 (0:00:00.182) 0:02:07.334 ********* 2025-05-14 02:47:24.649629 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:24.649637 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:47:24.649645 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:47:24.649653 | orchestrator | 2025-05-14 02:47:24.649660 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-14 02:47:24.649668 | orchestrator | Wednesday 14 May 2025 02:46:41 +0000 (0:00:23.678) 0:02:31.013 ********* 2025-05-14 02:47:24.649676 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:24.649684 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:47:24.649692 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:47:24.649700 | orchestrator | 2025-05-14 02:47:24.649707 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-14 02:47:24.649715 | orchestrator | Wednesday 14 May 2025 02:46:51 +0000 (0:00:10.319) 0:02:41.332 ********* 2025-05-14 02:47:24.649732 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:47:24.649740 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:47:24.649748 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:47:24.649755 | orchestrator | 2025-05-14 02:47:24.649763 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-14 02:47:24.649771 | orchestrator | Wednesday 14 May 2025 02:47:14 +0000 (0:00:23.247) 0:03:04.579 ********* 2025-05-14 02:47:24.649779 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:47:24.649787 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:47:24.649803 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:47:24.649811 | orchestrator | 2025-05-14 02:47:24.649819 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-14 02:47:24.649827 | orchestrator | Wednesday 14 May 2025 02:47:21 +0000 (0:00:06.710) 0:03:11.290 ********* 2025-05-14 
02:47:24.649835 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:24.649842 | orchestrator | 2025-05-14 02:47:24.649850 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:47:24.649859 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-14 02:47:24.649867 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 02:47:24.649875 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 02:47:24.649883 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:47:24.649891 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:47:24.649914 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:47:24.649931 | orchestrator | 2025-05-14 02:47:24.649940 | orchestrator | 2025-05-14 02:47:24.649948 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:47:24.649956 | orchestrator | Wednesday 14 May 2025 02:47:21 +0000 (0:00:00.577) 0:03:11.868 ********* 2025-05-14 02:47:24.649964 | orchestrator | =============================================================================== 2025-05-14 02:47:24.649972 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.68s 2025-05-14 02:47:24.649980 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.25s 2025-05-14 02:47:24.649988 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.47s 2025-05-14 02:47:24.649995 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.26s 2025-05-14 02:47:24.650003 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.32s 2025-05-14 02:47:24.650011 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.03s 2025-05-14 02:47:24.650048 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.86s 2025-05-14 02:47:24.650056 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.71s 2025-05-14 02:47:24.650064 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.68s 2025-05-14 02:47:24.650072 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 4.33s 2025-05-14 02:47:24.650080 | orchestrator | cinder : include_tasks -------------------------------------------------- 4.21s 2025-05-14 02:47:24.650088 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.12s 2025-05-14 02:47:24.650096 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS certificate --- 3.74s 2025-05-14 02:47:24.650104 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.67s 2025-05-14 02:47:24.650124 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.63s 2025-05-14 02:47:24.650137 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.63s 2025-05-14 02:47:24.650148 | orchestrator | service-ks-register : cinder | 
Creating services ------------------------ 3.63s 2025-05-14 02:47:24.650163 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.59s 2025-05-14 02:47:24.650174 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.59s 2025-05-14 02:47:24.650187 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.34s 2025-05-14 02:47:24.650209 | orchestrator | 2025-05-14 02:47:24 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:24.650223 | orchestrator | 2025-05-14 02:47:24 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:24.650236 | orchestrator | 2025-05-14 02:47:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:27.708783 | orchestrator | 2025-05-14 02:47:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:27.710250 | orchestrator | 2025-05-14 02:47:27 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:27.711609 | orchestrator | 2025-05-14 02:47:27 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state STARTED 2025-05-14 02:47:27.712847 | orchestrator | 2025-05-14 02:47:27 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:27.719414 | orchestrator | 2025-05-14 02:47:27 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:27.719495 | orchestrator | 2025-05-14 02:47:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:30.766812 | orchestrator | 2025-05-14 02:47:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:30.768538 | orchestrator | 2025-05-14 02:47:30 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:30.768833 | orchestrator | 2025-05-14 02:47:30 | INFO  | Task c7f05b37-433e-44d0-ab86-82215ccd23f4 is in state SUCCESS 2025-05-14 02:47:30.771246 | orchestrator | 2025-05-14 02:47:30.771354 | orchestrator | 2025-05-14 02:47:30.771364 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:47:30.771374 | orchestrator | 2025-05-14 02:47:30.771381 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:47:30.771388 | orchestrator | Wednesday 14 May 2025 02:43:58 +0000 (0:00:00.261) 0:00:00.261 ********* 2025-05-14 02:47:30.771394 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:47:30.771403 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:47:30.771409 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:47:30.771415 | orchestrator | 2025-05-14 02:47:30.771422 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:47:30.771429 | orchestrator | Wednesday 14 May 2025 02:43:58 +0000 (0:00:00.350) 0:00:00.612 ********* 2025-05-14 02:47:30.771436 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-14 02:47:30.771459 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-14 02:47:30.771467 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-14 02:47:30.771474 | orchestrator | 2025-05-14 02:47:30.771481 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-14 02:47:30.771488 | orchestrator | 2025-05-14 02:47:30.771496 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2025-05-14 02:47:30.771503 | orchestrator | Wednesday 14 May 2025 02:43:58 +0000 (0:00:00.246) 0:00:00.859 ********* 2025-05-14 02:47:30.771511 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:47:30.771542 | orchestrator | 2025-05-14 02:47:30.771549 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-14 02:47:30.771643 | orchestrator | Wednesday 14 May 2025 02:43:59 +0000 (0:00:00.667) 0:00:01.526 ********* 2025-05-14 02:47:30.771679 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-14 02:47:30.771686 | orchestrator | 2025-05-14 02:47:30.771693 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-14 02:47:30.771700 | orchestrator | Wednesday 14 May 2025 02:44:03 +0000 (0:00:03.560) 0:00:05.087 ********* 2025-05-14 02:47:30.771707 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-14 02:47:30.771715 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-14 02:47:30.771722 | orchestrator | 2025-05-14 02:47:30.771765 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-14 02:47:30.771773 | orchestrator | Wednesday 14 May 2025 02:44:09 +0000 (0:00:06.808) 0:00:11.895 ********* 2025-05-14 02:47:30.771779 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:47:30.771787 | orchestrator | 2025-05-14 02:47:30.771793 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-14 02:47:30.771816 | orchestrator | Wednesday 14 May 2025 02:44:14 +0000 (0:00:04.185) 0:00:16.081 ********* 2025-05-14 02:47:30.771822 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:47:30.771829 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-14 02:47:30.772061 | orchestrator | 2025-05-14 02:47:30.772071 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-14 02:47:30.772077 | orchestrator | Wednesday 14 May 2025 02:44:18 +0000 (0:00:04.206) 0:00:20.287 ********* 2025-05-14 02:47:30.772084 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:47:30.772091 | orchestrator | 2025-05-14 02:47:30.772098 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-14 02:47:30.772105 | orchestrator | Wednesday 14 May 2025 02:44:21 +0000 (0:00:03.751) 0:00:24.039 ********* 2025-05-14 02:47:30.772112 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-14 02:47:30.772120 | orchestrator | 2025-05-14 02:47:30.772126 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-14 02:47:30.772133 | orchestrator | Wednesday 14 May 2025 02:44:26 +0000 (0:00:04.468) 0:00:28.508 ********* 2025-05-14 02:47:30.772173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.772196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.772207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:47:30.772223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:47:30.772235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.772249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 
'yes'}}}})  2025-05-14 02:47:30.772260 | orchestrator | 2025-05-14 02:47:30.772266 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 02:47:30.772271 | orchestrator | Wednesday 14 May 2025 02:44:32 +0000 (0:00:06.277) 0:00:34.786 ********* 2025-05-14 02:47:30.772279 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:47:30.772285 | orchestrator | 2025-05-14 02:47:30.772337 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-14 02:47:30.772344 | orchestrator | Wednesday 14 May 2025 02:44:33 +0000 (0:00:00.972) 0:00:35.758 ********* 2025-05-14 02:47:30.772350 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:47:30.772356 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:30.772362 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:47:30.772368 | orchestrator | 2025-05-14 02:47:30.772374 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-14 02:47:30.772380 | orchestrator | Wednesday 14 May 2025 02:44:42 +0000 (0:00:08.650) 0:00:44.409 ********* 2025-05-14 02:47:30.772386 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:47:30.772394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:47:30.772400 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:47:30.772406 | orchestrator | 2025-05-14 02:47:30.772412 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-14 02:47:30.772417 | orchestrator | Wednesday 14 May 2025 02:44:44 +0000 (0:00:02.053) 0:00:46.463 ********* 2025-05-14 02:47:30.772445 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:47:30.772452 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:47:30.772458 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:47:30.772465 | orchestrator | 2025-05-14 02:47:30.772471 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-14 02:47:30.772477 | orchestrator | Wednesday 14 May 2025 02:44:45 +0000 (0:00:01.408) 0:00:47.871 ********* 2025-05-14 02:47:30.772482 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:47:30.772488 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:47:30.772494 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:47:30.772500 | orchestrator | 2025-05-14 02:47:30.772506 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-14 02:47:30.772518 | orchestrator | Wednesday 14 May 2025 02:44:46 +0000 (0:00:00.809) 0:00:48.681 ********* 2025-05-14 02:47:30.772524 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.772531 | orchestrator | 2025-05-14 02:47:30.772537 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-14 02:47:30.772543 | orchestrator | Wednesday 14 May 2025 02:44:46 +0000 (0:00:00.110) 0:00:48.791 ********* 2025-05-14 02:47:30.772549 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.772554 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.772560 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.772566 | orchestrator | 2025-05-14 02:47:30.772577 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 02:47:30.772583 | orchestrator | Wednesday 14 May 2025 02:44:47 +0000 (0:00:00.530) 0:00:49.322 ********* 2025-05-14 02:47:30.772589 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:47:30.772595 | orchestrator | 2025-05-14 02:47:30.772601 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-14 02:47:30.772608 | orchestrator | Wednesday 14 May 2025 02:44:48 +0000 (0:00:00.874) 0:00:50.197 ********* 2025-05-14 02:47:30.772623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.772631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.772651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.772658 | orchestrator | 2025-05-14 02:47:30.772664 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-14 02:47:30.772670 | orchestrator | Wednesday 14 May 2025 02:44:53 +0000 (0:00:05.138) 0:00:55.335 ********* 2025-05-14 02:47:30.772677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:47:30.772688 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.772704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:47:30.772710 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.772717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:47:30.772723 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.772730 | orchestrator | 2025-05-14 02:47:30.772749 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-14 02:47:30.772759 | orchestrator | Wednesday 14 May 2025 02:44:57 +0000 (0:00:04.687) 0:01:00.023 ********* 2025-05-14 02:47:30.772773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:47:30.772782 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.772788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:47:30.772795 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.772805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:47:30.772816 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.772822 | 
orchestrator | 2025-05-14 02:47:30.772828 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-14 02:47:30.772834 | orchestrator | Wednesday 14 May 2025 02:45:03 +0000 (0:00:05.375) 0:01:05.398 ********* 2025-05-14 02:47:30.772840 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.772846 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.772853 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.772859 | orchestrator | 2025-05-14 02:47:30.772870 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-14 02:47:30.772876 | orchestrator | Wednesday 14 May 2025 02:45:13 +0000 (0:00:09.981) 0:01:15.380 ********* 2025-05-14 02:47:30.772882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.772893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:47:30.772911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.772918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:47:30.772937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.772944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:47:30.772956 | orchestrator | 2025-05-14 02:47:30.772963 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-14 02:47:30.772969 | orchestrator | Wednesday 14 May 2025 02:45:21 +0000 (0:00:08.505) 0:01:23.885 ********* 2025-05-14 02:47:30.772974 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:30.772982 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:47:30.772993 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:47:30.773001 | orchestrator | 2025-05-14 02:47:30.773009 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-14 02:47:30.773015 | orchestrator | Wednesday 14 May 2025 02:45:37 +0000 (0:00:16.132) 0:01:40.018 ********* 2025-05-14 02:47:30.773022 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.773029 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.773036 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.773042 | orchestrator | 2025-05-14 02:47:30.773049 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-14 02:47:30.773056 | orchestrator | Wednesday 14 May 2025 02:45:48 +0000 (0:00:10.164) 0:01:50.182 ********* 2025-05-14 02:47:30.773063 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.773070 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.773076 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.773082 | orchestrator | 2025-05-14 02:47:30.773088 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-14 02:47:30.773097 | orchestrator | Wednesday 14 May 2025 02:45:53 +0000 (0:00:05.251) 0:01:55.434 ********* 2025-05-14 02:47:30.773104 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.773110 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.773117 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.773123 | orchestrator | 2025-05-14 02:47:30.773129 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-14 02:47:30.773136 | orchestrator | Wednesday 14 May 2025 02:46:04 +0000 (0:00:11.130) 0:02:06.565 ********* 2025-05-14 02:47:30.773143 | orchestrator | skipping: [testbed-node-1] 2025-05-14 
02:47:30.773153 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.773160 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.773166 | orchestrator | 2025-05-14 02:47:30.773173 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-14 02:47:30.773179 | orchestrator | Wednesday 14 May 2025 02:46:10 +0000 (0:00:05.573) 0:02:12.139 ********* 2025-05-14 02:47:30.773186 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.773192 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.773199 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.773206 | orchestrator | 2025-05-14 02:47:30.773212 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-14 02:47:30.773218 | orchestrator | Wednesday 14 May 2025 02:46:10 +0000 (0:00:00.251) 0:02:12.390 ********* 2025-05-14 02:47:30.773225 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-14 02:47:30.773236 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.773243 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-14 02:47:30.773250 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.773256 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-14 02:47:30.773262 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.773269 | orchestrator | 2025-05-14 02:47:30.773275 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-14 02:47:30.773282 | orchestrator | Wednesday 14 May 2025 02:46:13 +0000 (0:00:03.123) 0:02:15.514 ********* 2025-05-14 02:47:30.773288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.773378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.773392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:47:30.773405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:47:30.773413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:47:30.773430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:47:30.773438 | orchestrator | 2025-05-14 02:47:30.773444 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 02:47:30.773450 | orchestrator | Wednesday 14 May 2025 02:46:17 +0000 (0:00:03.899) 0:02:19.413 ********* 2025-05-14 02:47:30.773456 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:47:30.773463 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:47:30.773470 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:47:30.773476 | orchestrator | 2025-05-14 02:47:30.773486 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-14 02:47:30.773497 | orchestrator | Wednesday 14 May 2025 02:46:17 +0000 (0:00:00.502) 0:02:19.916 ********* 2025-05-14 02:47:30.773503 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:30.773509 | orchestrator | 2025-05-14 02:47:30.773515 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-14 02:47:30.773521 | orchestrator | Wednesday 14 May 2025 02:46:20 +0000 (0:00:02.368) 0:02:22.284 ********* 2025-05-14 02:47:30.773527 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:30.773532 | orchestrator | 2025-05-14 02:47:30.773538 | orchestrator | TASK 
[glance : Enable log_bin_trust_function_creators function] **************** 2025-05-14 02:47:30.773544 | orchestrator | Wednesday 14 May 2025 02:46:22 +0000 (0:00:02.359) 0:02:24.643 ********* 2025-05-14 02:47:30.773550 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:30.773556 | orchestrator | 2025-05-14 02:47:30.773563 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-14 02:47:30.773569 | orchestrator | Wednesday 14 May 2025 02:46:24 +0000 (0:00:02.260) 0:02:26.904 ********* 2025-05-14 02:47:30.773575 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:30.773581 | orchestrator | 2025-05-14 02:47:30.773587 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-14 02:47:30.773593 | orchestrator | Wednesday 14 May 2025 02:46:50 +0000 (0:00:25.896) 0:02:52.800 ********* 2025-05-14 02:47:30.773599 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:30.773605 | orchestrator | 2025-05-14 02:47:30.773611 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-14 02:47:30.773617 | orchestrator | Wednesday 14 May 2025 02:46:52 +0000 (0:00:02.170) 0:02:54.971 ********* 2025-05-14 02:47:30.773623 | orchestrator | 2025-05-14 02:47:30.773630 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-14 02:47:30.773636 | orchestrator | Wednesday 14 May 2025 02:46:52 +0000 (0:00:00.063) 0:02:55.034 ********* 2025-05-14 02:47:30.773642 | orchestrator | 2025-05-14 02:47:30.773649 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-14 02:47:30.773655 | orchestrator | Wednesday 14 May 2025 02:46:53 +0000 (0:00:00.058) 0:02:55.093 ********* 2025-05-14 02:47:30.773661 | orchestrator | 2025-05-14 02:47:30.773668 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-14 02:47:30.773674 | orchestrator | Wednesday 14 May 2025 02:46:53 +0000 (0:00:00.167) 0:02:55.261 ********* 2025-05-14 02:47:30.773682 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:47:30.773689 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:47:30.773695 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:47:30.773702 | orchestrator | 2025-05-14 02:47:30.773709 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:47:30.773717 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-14 02:47:30.773726 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-14 02:47:30.773733 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-14 02:47:30.773740 | orchestrator | 2025-05-14 02:47:30.773747 | orchestrator | 2025-05-14 02:47:30.773754 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:47:30.773760 | orchestrator | Wednesday 14 May 2025 02:47:27 +0000 (0:00:34.343) 0:03:29.604 ********* 2025-05-14 02:47:30.773766 | orchestrator | =============================================================================== 2025-05-14 02:47:30.773773 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.34s 2025-05-14 02:47:30.773780 | orchestrator | glance : Running Glance 
bootstrap container ---------------------------- 25.90s 2025-05-14 02:47:30.773786 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 16.13s 2025-05-14 02:47:30.773798 | orchestrator | glance : Copying over glance-image-import.conf ------------------------- 11.13s 2025-05-14 02:47:30.773805 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 10.16s 2025-05-14 02:47:30.773812 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 9.98s 2025-05-14 02:47:30.773818 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 8.65s 2025-05-14 02:47:30.773824 | orchestrator | glance : Copying over config.json files for services -------------------- 8.51s 2025-05-14 02:47:30.773831 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.81s 2025-05-14 02:47:30.773837 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.28s 2025-05-14 02:47:30.773844 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.57s 2025-05-14 02:47:30.773851 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.38s 2025-05-14 02:47:30.773857 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.25s 2025-05-14 02:47:30.773864 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.14s 2025-05-14 02:47:30.773874 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.69s 2025-05-14 02:47:30.773882 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.47s 2025-05-14 02:47:30.773888 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.21s 2025-05-14 02:47:30.773895 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.19s 2025-05-14 02:47:30.773901 | orchestrator | glance : Check glance containers ---------------------------------------- 3.90s 2025-05-14 02:47:30.773913 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.75s 2025-05-14 02:47:30.774882 | orchestrator | 2025-05-14 02:47:30 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:30.776930 | orchestrator | 2025-05-14 02:47:30 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:30.778435 | orchestrator | 2025-05-14 02:47:30 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:30.778508 | orchestrator | 2025-05-14 02:47:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:33.824204 | orchestrator | 2025-05-14 02:47:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:33.824367 | orchestrator | 2025-05-14 02:47:33 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:33.825574 | orchestrator | 2025-05-14 02:47:33 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:33.826980 | orchestrator | 2025-05-14 02:47:33 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:33.828254 | orchestrator | 2025-05-14 02:47:33 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:33.828527 | orchestrator | 2025-05-14 
02:47:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:36.879083 | orchestrator | 2025-05-14 02:47:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:36.881016 | orchestrator | 2025-05-14 02:47:36 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:36.883524 | orchestrator | 2025-05-14 02:47:36 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:36.886440 | orchestrator | 2025-05-14 02:47:36 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:36.888540 | orchestrator | 2025-05-14 02:47:36 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:36.888970 | orchestrator | 2025-05-14 02:47:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:39.937439 | orchestrator | 2025-05-14 02:47:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:39.939163 | orchestrator | 2025-05-14 02:47:39 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:39.940698 | orchestrator | 2025-05-14 02:47:39 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:39.943197 | orchestrator | 2025-05-14 02:47:39 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:39.944750 | orchestrator | 2025-05-14 02:47:39 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:39.944803 | orchestrator | 2025-05-14 02:47:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:42.987914 | orchestrator | 2025-05-14 02:47:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:42.989905 | orchestrator | 2025-05-14 02:47:42 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:42.991121 | orchestrator | 2025-05-14 02:47:42 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:42.992793 | orchestrator | 2025-05-14 02:47:42 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:42.994263 | orchestrator | 2025-05-14 02:47:42 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:42.994351 | orchestrator | 2025-05-14 02:47:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:46.039981 | orchestrator | 2025-05-14 02:47:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:46.041176 | orchestrator | 2025-05-14 02:47:46 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:46.042600 | orchestrator | 2025-05-14 02:47:46 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:46.045090 | orchestrator | 2025-05-14 02:47:46 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:46.046153 | orchestrator | 2025-05-14 02:47:46 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:46.046175 | orchestrator | 2025-05-14 02:47:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:49.095250 | orchestrator | 2025-05-14 02:47:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:49.096065 | orchestrator | 2025-05-14 02:47:49 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:49.099248 | orchestrator | 2025-05-14 
02:47:49 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:49.100015 | orchestrator | 2025-05-14 02:47:49 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:49.101371 | orchestrator | 2025-05-14 02:47:49 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:49.101417 | orchestrator | 2025-05-14 02:47:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:52.143499 | orchestrator | 2025-05-14 02:47:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:52.144862 | orchestrator | 2025-05-14 02:47:52 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:52.147681 | orchestrator | 2025-05-14 02:47:52 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:52.149173 | orchestrator | 2025-05-14 02:47:52 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:52.150732 | orchestrator | 2025-05-14 02:47:52 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:52.150837 | orchestrator | 2025-05-14 02:47:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:55.191148 | orchestrator | 2025-05-14 02:47:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:55.193812 | orchestrator | 2025-05-14 02:47:55 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:55.194343 | orchestrator | 2025-05-14 02:47:55 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:55.197319 | orchestrator | 2025-05-14 02:47:55 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:55.198187 | orchestrator | 2025-05-14 02:47:55 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:55.198230 | orchestrator | 2025-05-14 02:47:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:58.245237 | orchestrator | 2025-05-14 02:47:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:47:58.246783 | orchestrator | 2025-05-14 02:47:58 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:47:58.249363 | orchestrator | 2025-05-14 02:47:58 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:47:58.251079 | orchestrator | 2025-05-14 02:47:58 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:47:58.252348 | orchestrator | 2025-05-14 02:47:58 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:47:58.252387 | orchestrator | 2025-05-14 02:47:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:01.293382 | orchestrator | 2025-05-14 02:48:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:01.296546 | orchestrator | 2025-05-14 02:48:01 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:48:01.296598 | orchestrator | 2025-05-14 02:48:01 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:01.296614 | orchestrator | 2025-05-14 02:48:01 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:01.296967 | orchestrator | 2025-05-14 02:48:01 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:01.297150 | 
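[editor's note] The interleaved "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records come from the deployment wrapper polling its background tasks until each reports SUCCESS (the STARTED/SUCCESS names match Celery task states). A minimal sketch of that wait loop, where get_task_state() is a hypothetical lookup, not the real client API:

```python
# Illustrative polling loop; get_task_state() is a stand-in for however the
# real client queries task status.
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```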
orchestrator | 2025-05-14 02:48:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:04.344349 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:04.346819 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:48:04.348818 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:04.351229 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:04.353977 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:04.354076 | orchestrator | 2025-05-14 02:48:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:07.413601 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:07.415558 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:48:07.415625 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:07.415786 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:07.416861 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:07.416907 | orchestrator | 2025-05-14 02:48:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:10.462376 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:10.463597 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:48:10.464880 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:10.466657 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:10.467836 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:10.467874 | orchestrator | 2025-05-14 02:48:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:13.518942 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:13.521126 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:48:13.525243 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:13.527499 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:13.528937 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:13.528982 | orchestrator | 2025-05-14 02:48:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:16.573714 | orchestrator | 2025-05-14 02:48:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:16.575031 | orchestrator | 2025-05-14 02:48:16 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:48:16.576907 | 
orchestrator | 2025-05-14 02:48:16 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:16.579764 | orchestrator | 2025-05-14 02:48:16 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:16.581841 | orchestrator | 2025-05-14 02:48:16 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:16.581924 | orchestrator | 2025-05-14 02:48:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:19.625008 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:19.626794 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state STARTED 2025-05-14 02:48:19.627807 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:19.629031 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:19.629786 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:19.631469 | orchestrator | 2025-05-14 02:48:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:22.690352 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:22.691003 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task c9fa9f70-d748-4a70-bae1-0a548a3bde51 is in state SUCCESS 2025-05-14 02:48:22.693206 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:22.694604 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:22.696171 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:22.696191 | orchestrator | 2025-05-14 02:48:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:25.753191 | orchestrator | 2025-05-14 02:48:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:25.756577 | orchestrator | 2025-05-14 02:48:25 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:25.758827 | orchestrator | 2025-05-14 02:48:25 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:25.760393 | orchestrator | 2025-05-14 02:48:25 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:25.760425 | orchestrator | 2025-05-14 02:48:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:28.807976 | orchestrator | 2025-05-14 02:48:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:28.809430 | orchestrator | 2025-05-14 02:48:28 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:28.810387 | orchestrator | 2025-05-14 02:48:28 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:28.811630 | orchestrator | 2025-05-14 02:48:28 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:28.811818 | orchestrator | 2025-05-14 02:48:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:31.853753 | orchestrator | 2025-05-14 02:48:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:31.854998 | 
orchestrator | 2025-05-14 02:48:31 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:31.857094 | orchestrator | 2025-05-14 02:48:31 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:31.859024 | orchestrator | 2025-05-14 02:48:31 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:31.859177 | orchestrator | 2025-05-14 02:48:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:34.903326 | orchestrator | 2025-05-14 02:48:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:34.904003 | orchestrator | 2025-05-14 02:48:34 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:34.905854 | orchestrator | 2025-05-14 02:48:34 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:34.907574 | orchestrator | 2025-05-14 02:48:34 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:34.907627 | orchestrator | 2025-05-14 02:48:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:37.955525 | orchestrator | 2025-05-14 02:48:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:37.957843 | orchestrator | 2025-05-14 02:48:37 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:37.959983 | orchestrator | 2025-05-14 02:48:37 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:37.961661 | orchestrator | 2025-05-14 02:48:37 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:37.961941 | orchestrator | 2025-05-14 02:48:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:41.020303 | orchestrator | 2025-05-14 02:48:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:41.021840 | orchestrator | 2025-05-14 02:48:41 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:41.023307 | orchestrator | 2025-05-14 02:48:41 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:41.025096 | orchestrator | 2025-05-14 02:48:41 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:41.025131 | orchestrator | 2025-05-14 02:48:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:44.076100 | orchestrator | 2025-05-14 02:48:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:44.076767 | orchestrator | 2025-05-14 02:48:44 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:44.077999 | orchestrator | 2025-05-14 02:48:44 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:44.079299 | orchestrator | 2025-05-14 02:48:44 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:44.079354 | orchestrator | 2025-05-14 02:48:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:47.130398 | orchestrator | 2025-05-14 02:48:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:47.130467 | orchestrator | 2025-05-14 02:48:47 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:47.132138 | orchestrator | 2025-05-14 02:48:47 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:47.133641 | 
orchestrator | 2025-05-14 02:48:47 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:47.133804 | orchestrator | 2025-05-14 02:48:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:50.192046 | orchestrator | 2025-05-14 02:48:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:50.194584 | orchestrator | 2025-05-14 02:48:50 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:50.195998 | orchestrator | 2025-05-14 02:48:50 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:50.198118 | orchestrator | 2025-05-14 02:48:50 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:50.198144 | orchestrator | 2025-05-14 02:48:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:53.250663 | orchestrator | 2025-05-14 02:48:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:53.250857 | orchestrator | 2025-05-14 02:48:53 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:53.253158 | orchestrator | 2025-05-14 02:48:53 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:53.255480 | orchestrator | 2025-05-14 02:48:53 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:53.255542 | orchestrator | 2025-05-14 02:48:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:56.312449 | orchestrator | 2025-05-14 02:48:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:56.316375 | orchestrator | 2025-05-14 02:48:56 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:56.319003 | orchestrator | 2025-05-14 02:48:56 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:56.320776 | orchestrator | 2025-05-14 02:48:56 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:56.320978 | orchestrator | 2025-05-14 02:48:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:59.379187 | orchestrator | 2025-05-14 02:48:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:48:59.379829 | orchestrator | 2025-05-14 02:48:59 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:48:59.384642 | orchestrator | 2025-05-14 02:48:59 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:48:59.394901 | orchestrator | 2025-05-14 02:48:59 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:48:59.394987 | orchestrator | 2025-05-14 02:48:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:02.439858 | orchestrator | 2025-05-14 02:49:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:02.440763 | orchestrator | 2025-05-14 02:49:02 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:02.442629 | orchestrator | 2025-05-14 02:49:02 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:49:02.443775 | orchestrator | 2025-05-14 02:49:02 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:49:02.443793 | orchestrator | 2025-05-14 02:49:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:05.489983 | orchestrator | 2025-05-14 
02:49:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:05.491353 | orchestrator | 2025-05-14 02:49:05 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:05.493143 | orchestrator | 2025-05-14 02:49:05 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:49:05.494286 | orchestrator | 2025-05-14 02:49:05 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:49:05.494324 | orchestrator | 2025-05-14 02:49:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:08.551325 | orchestrator | 2025-05-14 02:49:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:08.552044 | orchestrator | 2025-05-14 02:49:08 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:08.555677 | orchestrator | 2025-05-14 02:49:08 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:49:08.556342 | orchestrator | 2025-05-14 02:49:08 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:49:08.556377 | orchestrator | 2025-05-14 02:49:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:11.614857 | orchestrator | 2025-05-14 02:49:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:11.616544 | orchestrator | 2025-05-14 02:49:11 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:11.619590 | orchestrator | 2025-05-14 02:49:11 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:49:11.621473 | orchestrator | 2025-05-14 02:49:11 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:49:11.621632 | orchestrator | 2025-05-14 02:49:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:14.670830 | orchestrator | 2025-05-14 02:49:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:14.674516 | orchestrator | 2025-05-14 02:49:14 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:14.676038 | orchestrator | 2025-05-14 02:49:14 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:49:14.677689 | orchestrator | 2025-05-14 02:49:14 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:49:14.677769 | orchestrator | 2025-05-14 02:49:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:17.729049 | orchestrator | 2025-05-14 02:49:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:17.731363 | orchestrator | 2025-05-14 02:49:17 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:17.731434 | orchestrator | 2025-05-14 02:49:17 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:49:17.731514 | orchestrator | 2025-05-14 02:49:17 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:49:17.731527 | orchestrator | 2025-05-14 02:49:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:20.782168 | orchestrator | 2025-05-14 02:49:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:20.786311 | orchestrator | 2025-05-14 02:49:20 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:20.788884 | orchestrator | 2025-05-14 
02:49:20 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state STARTED 2025-05-14 02:49:20.790971 | orchestrator | 2025-05-14 02:49:20 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:49:20.791020 | orchestrator | 2025-05-14 02:49:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:23.847980 | orchestrator | 2025-05-14 02:49:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:23.849571 | orchestrator | 2025-05-14 02:49:23 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:23.851856 | orchestrator | 2025-05-14 02:49:23 | INFO  | Task 0a95f9cc-a026-46b5-bfb8-cae4b9494d8c is in state SUCCESS 2025-05-14 02:49:23.854565 | orchestrator | 2025-05-14 02:49:23.854637 | orchestrator | 2025-05-14 02:49:23.854650 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:49:23.854663 | orchestrator | 2025-05-14 02:49:23.854675 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:49:23.854701 | orchestrator | Wednesday 14 May 2025 02:47:25 +0000 (0:00:00.313) 0:00:00.313 ********* 2025-05-14 02:49:23.854723 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:49:23.854737 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:49:23.854748 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:49:23.854759 | orchestrator | 2025-05-14 02:49:23.854770 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:49:23.854781 | orchestrator | Wednesday 14 May 2025 02:47:25 +0000 (0:00:00.418) 0:00:00.731 ********* 2025-05-14 02:49:23.854808 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-14 02:49:23.854820 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-14 02:49:23.854831 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-14 02:49:23.854841 | orchestrator | 2025-05-14 02:49:23.854853 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-14 02:49:23.854863 | orchestrator | 2025-05-14 02:49:23.854874 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 02:49:23.854909 | orchestrator | Wednesday 14 May 2025 02:47:25 +0000 (0:00:00.331) 0:00:01.063 ********* 2025-05-14 02:49:23.854920 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:49:23.854932 | orchestrator | 2025-05-14 02:49:23.855087 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-14 02:49:23.855101 | orchestrator | Wednesday 14 May 2025 02:47:26 +0000 (0:00:00.798) 0:00:01.862 ********* 2025-05-14 02:49:23.855115 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-14 02:49:23.855127 | orchestrator | 2025-05-14 02:49:23.855139 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-14 02:49:23.855151 | orchestrator | Wednesday 14 May 2025 02:47:30 +0000 (0:00:03.824) 0:00:05.686 ********* 2025-05-14 02:49:23.855163 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-14 02:49:23.855175 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> 
public) 2025-05-14 02:49:23.855187 | orchestrator | 2025-05-14 02:49:23.855199 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-14 02:49:23.855237 | orchestrator | Wednesday 14 May 2025 02:47:37 +0000 (0:00:06.749) 0:00:12.436 ********* 2025-05-14 02:49:23.855251 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:49:23.855264 | orchestrator | 2025-05-14 02:49:23.855275 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-14 02:49:23.855286 | orchestrator | Wednesday 14 May 2025 02:47:40 +0000 (0:00:03.533) 0:00:15.969 ********* 2025-05-14 02:49:23.855297 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:49:23.855308 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-14 02:49:23.855319 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-14 02:49:23.855330 | orchestrator | 2025-05-14 02:49:23.855341 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-14 02:49:23.855352 | orchestrator | Wednesday 14 May 2025 02:47:49 +0000 (0:00:08.412) 0:00:24.382 ********* 2025-05-14 02:49:23.855363 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:49:23.855373 | orchestrator | 2025-05-14 02:49:23.855384 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-14 02:49:23.855395 | orchestrator | Wednesday 14 May 2025 02:47:52 +0000 (0:00:03.359) 0:00:27.742 ********* 2025-05-14 02:49:23.855405 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-14 02:49:23.855416 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-14 02:49:23.855427 | orchestrator | 2025-05-14 02:49:23.855437 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-14 02:49:23.855448 | orchestrator | Wednesday 14 May 2025 02:48:00 +0000 (0:00:08.074) 0:00:35.817 ********* 2025-05-14 02:49:23.855459 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-14 02:49:23.855469 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-14 02:49:23.855480 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-14 02:49:23.855490 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-14 02:49:23.855501 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-14 02:49:23.855511 | orchestrator | 2025-05-14 02:49:23.855522 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 02:49:23.855533 | orchestrator | Wednesday 14 May 2025 02:48:17 +0000 (0:00:17.142) 0:00:52.959 ********* 2025-05-14 02:49:23.855544 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:49:23.855554 | orchestrator | 2025-05-14 02:49:23.855565 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-14 02:49:23.855587 | orchestrator | Wednesday 14 May 2025 02:48:18 +0000 (0:00:00.765) 0:00:53.725 ********* 2025-05-14 02:49:23.855617 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "

503 Service Unavailable

\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-05-14 02:49:23.855632 | orchestrator | 2025-05-14 02:49:23.855643 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:49:23.855656 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-14 02:49:23.855667 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:49:23.855684 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:49:23.855695 | orchestrator | 2025-05-14 02:49:23.855706 | orchestrator | 2025-05-14 02:49:23.855717 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:49:23.855727 | orchestrator | Wednesday 14 May 2025 02:48:22 +0000 (0:00:03.551) 0:00:57.277 ********* 2025-05-14 02:49:23.855738 | orchestrator | =============================================================================== 2025-05-14 02:49:23.855749 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.14s 2025-05-14 02:49:23.855759 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.41s 2025-05-14 02:49:23.855770 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.07s 2025-05-14 02:49:23.855781 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.75s 2025-05-14 02:49:23.855791 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.82s 2025-05-14 02:49:23.855802 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.55s 2025-05-14 02:49:23.855813 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.53s 2025-05-14 02:49:23.855823 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.36s 2025-05-14 02:49:23.855834 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.80s 2025-05-14 02:49:23.855845 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.77s 2025-05-14 02:49:23.855855 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-05-14 02:49:23.855866 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s 2025-05-14 02:49:23.855876 | orchestrator | 2025-05-14 02:49:23.855887 | orchestrator | 2025-05-14 02:49:23.855898 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:49:23.855908 | orchestrator | 2025-05-14 02:49:23.855919 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:49:23.855930 | orchestrator | Wednesday 14 May 2025 02:47:31 +0000 (0:00:00.313) 0:00:00.313 ********* 2025-05-14 02:49:23.855940 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:49:23.855951 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:49:23.855962 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:49:23.855973 | orchestrator | 2025-05-14 02:49:23.855983 | orchestrator | TASK [Group hosts based 
on enabled services] *********************************** 2025-05-14 02:49:23.855994 | orchestrator | Wednesday 14 May 2025 02:47:31 +0000 (0:00:00.362) 0:00:00.675 ********* 2025-05-14 02:49:23.856004 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-14 02:49:23.856015 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-14 02:49:23.856026 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-14 02:49:23.856045 | orchestrator | 2025-05-14 02:49:23.856056 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-14 02:49:23.856067 | orchestrator | 2025-05-14 02:49:23.856077 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-14 02:49:23.856088 | orchestrator | Wednesday 14 May 2025 02:47:31 +0000 (0:00:00.337) 0:00:01.013 ********* 2025-05-14 02:49:23.856099 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:49:23.856109 | orchestrator | 2025-05-14 02:49:23.856120 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-14 02:49:23.856131 | orchestrator | Wednesday 14 May 2025 02:47:32 +0000 (0:00:00.754) 0:00:01.767 ********* 2025-05-14 02:49:23.856143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856194 | orchestrator | 2025-05-14 
02:49:23.856205 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-14 02:49:23.856312 | orchestrator | Wednesday 14 May 2025 02:47:33 +0000 (0:00:00.819) 0:00:02.587 ********* 2025-05-14 02:49:23.856329 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-14 02:49:23.856340 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-14 02:49:23.856351 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:49:23.856362 | orchestrator | 2025-05-14 02:49:23.856373 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-14 02:49:23.856383 | orchestrator | Wednesday 14 May 2025 02:47:33 +0000 (0:00:00.515) 0:00:03.103 ********* 2025-05-14 02:49:23.856394 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:49:23.856405 | orchestrator | 2025-05-14 02:49:23.856415 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-14 02:49:23.856426 | orchestrator | Wednesday 14 May 2025 02:47:34 +0000 (0:00:00.590) 0:00:03.694 ********* 2025-05-14 02:49:23.856447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856481 | orchestrator | 2025-05-14 02:49:23.856492 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend 
internal TLS certificate] *** 2025-05-14 02:49:23.856503 | orchestrator | Wednesday 14 May 2025 02:47:35 +0000 (0:00:01.481) 0:00:05.175 ********* 2025-05-14 02:49:23.856529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:49:23.856542 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:49:23.856553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:49:23.856565 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:49:23.856575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:49:23.856591 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:49:23.856601 | orchestrator | 2025-05-14 02:49:23.856611 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-14 02:49:23.856620 | orchestrator | Wednesday 14 May 2025 02:47:36 +0000 (0:00:00.690) 0:00:05.865 ********* 2025-05-14 02:49:23.856630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:49:23.856640 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 02:49:23.856651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:49:23.856661 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:49:23.856677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:49:23.856688 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:49:23.856698 | orchestrator | 2025-05-14 02:49:23.856708 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-14 02:49:23.856721 | orchestrator | Wednesday 14 May 2025 02:47:37 +0000 (0:00:00.717) 0:00:06.583 ********* 2025-05-14 02:49:23.856731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856768 | orchestrator | 2025-05-14 02:49:23.856777 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-14 02:49:23.856787 | orchestrator | Wednesday 14 May 2025 02:47:38 +0000 (0:00:01.613) 0:00:08.196 ********* 2025-05-14 02:49:23.856797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.856839 | orchestrator | 2025-05-14 02:49:23.856849 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-14 02:49:23.856870 | orchestrator | Wednesday 14 May 2025 02:47:40 +0000 (0:00:01.609) 0:00:09.805 ********* 2025-05-14 02:49:23.856880 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:49:23.856889 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:49:23.856899 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:49:23.856908 | orchestrator | 2025-05-14 02:49:23.856918 | 
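[editor's note] Each grafana task above loops over the same kolla service definition, which is why the full dict, including the two HAProxy frontends on port 3000, is repeated for every item. Stripped of the log noise, the data shape looks roughly like this (values copied from the log; the loop at the end only illustrates how such per-service dicts are typically iterated, it is not the role's actual code):

```python
# Shape of the service definition the grafana tasks iterate over.
grafana_services = {
    "grafana": {
        "container_name": "grafana",
        "group": "grafana",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/grafana:11.4.0.20241206",
        "volumes": [
            "/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "haproxy": {
            "grafana_server": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "3000", "listen_port": "3000",
            },
            "grafana_server_external": {
                "enabled": True, "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "3000", "listen_port": "3000",
            },
        },
    },
}

# Illustrative iteration, analogous to the role's per-service template tasks.
for name, svc in grafana_services.items():
    if svc["enabled"]:
        print(f"configure {name} from image {svc['image']}")
```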
orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-14 02:49:23.856927 | orchestrator | Wednesday 14 May 2025 02:47:40 +0000 (0:00:00.271) 0:00:10.076 ********* 2025-05-14 02:49:23.856937 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-14 02:49:23.856947 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-14 02:49:23.856956 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-14 02:49:23.856965 | orchestrator | 2025-05-14 02:49:23.856975 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-14 02:49:23.856984 | orchestrator | Wednesday 14 May 2025 02:47:42 +0000 (0:00:01.374) 0:00:11.451 ********* 2025-05-14 02:49:23.856994 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-14 02:49:23.857004 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-14 02:49:23.857014 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-14 02:49:23.857024 | orchestrator | 2025-05-14 02:49:23.857034 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-14 02:49:23.857043 | orchestrator | Wednesday 14 May 2025 02:47:43 +0000 (0:00:01.390) 0:00:12.842 ********* 2025-05-14 02:49:23.857053 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:49:23.857063 | orchestrator | 2025-05-14 02:49:23.857072 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-14 02:49:23.857082 | orchestrator | Wednesday 14 May 2025 02:47:43 +0000 (0:00:00.450) 0:00:13.292 ********* 2025-05-14 02:49:23.857092 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-14 02:49:23.857101 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-14 02:49:23.857111 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:49:23.857121 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:49:23.857131 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:49:23.857141 | orchestrator | 2025-05-14 02:49:23.857150 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-14 02:49:23.857160 | orchestrator | Wednesday 14 May 2025 02:47:44 +0000 (0:00:00.915) 0:00:14.207 ********* 2025-05-14 02:49:23.857170 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:49:23.857179 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:49:23.857189 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:49:23.857199 | orchestrator | 2025-05-14 02:49:23.857208 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-14 02:49:23.857249 | orchestrator | Wednesday 14 May 2025 02:47:45 +0000 (0:00:00.324) 0:00:14.531 ********* 2025-05-14 02:49:23.857266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 167897, 'inode': 1090311, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.272604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090311, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.272604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090311, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.272604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090302, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2646039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090302, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2646039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090302, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2646039, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090294, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.260604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090294, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.260604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090294, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.260604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090309, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2686038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090309, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2686038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 
02:49:23.857475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090309, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2686038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090283, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2586038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.857496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090283, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2586038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090283, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2586038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090295, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2626038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090295, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2626038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090295, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2626038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090307, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2676039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090307, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2676039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090307, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2676039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 9025, 'inode': 1090279, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2566037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090279, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2566037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090279, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2566037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090269, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2536037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090269, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2536037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090269, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2536037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090286, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2586038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090286, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2586038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090286, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2586038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090273, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2556038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090273, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2556038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858451 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090273, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2556038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1090305, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.266604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1090305, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.266604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1090305, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.266604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1090287, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.2596037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1090287, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.2596037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1090287, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.2596037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090310, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.270604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090310, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.270604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090310, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.270604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090278, 'dev': 173, 'nlink': 1, 
'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2566037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090278, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2566037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090278, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2566037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090299, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.263604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090299, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.263604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090299, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.263604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090270, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2546036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090270, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2546036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090270, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2546036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090277, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2566037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090277, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2566037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858796 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090277, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2566037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090291, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.260604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090291, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.260604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090291, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.260604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090340, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2916043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090340, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2916043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090340, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2916043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090335, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.281604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090335, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.281604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090335, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.281604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090383, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2966042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090383, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2966042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090383, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2966042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090318, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.272604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.858990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090318, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.272604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090318, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.272604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090389, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3016043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090389, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3016043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090389, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3016043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090364, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2926042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 70691, 'inode': 1090364, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2926042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090364, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2926042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090369, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2936041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090369, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2936041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090369, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2936041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090319, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1747187617.273604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090319, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.273604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090319, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.273604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090339, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.282604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090339, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.282604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090339, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.282604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090397, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3026044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090397, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3026044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090397, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3026044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1090374, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.2956042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1090374, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.2956042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-05-14 02:49:23.859322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1090374, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187617.2956042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090324, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.276604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090324, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.276604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090324, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.276604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090321, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.273604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859388 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090321, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.273604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090321, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.273604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090327, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.277604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090327, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.277604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090327, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.277604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090329, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2806041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090329, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2806041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090329, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.2806041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090400, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3026044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090400, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3026044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090400, 'dev': 
173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187617.3026044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:49:23.859537 | orchestrator | 2025-05-14 02:49:23.859548 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-14 02:49:23.859565 | orchestrator | Wednesday 14 May 2025 02:48:19 +0000 (0:00:34.065) 0:00:48.597 ********* 2025-05-14 02:49:23.859579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.859589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.859599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:49:23.859609 | orchestrator | 2025-05-14 02:49:23.859619 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-14 02:49:23.859629 | orchestrator | Wednesday 14 May 2025 02:48:20 +0000 (0:00:01.145) 0:00:49.743 ********* 2025-05-14 02:49:23.859639 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:49:23.859649 | orchestrator | 2025-05-14 02:49:23.859659 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-14 02:49:23.859674 | orchestrator | Wednesday 14 May 2025 02:48:23 +0000 (0:00:02.849) 0:00:52.592 ********* 2025-05-14 02:49:23.859683 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:49:23.859693 | orchestrator | 2025-05-14 02:49:23.859702 | orchestrator | TASK [grafana : 
Flush handlers] ************************************************ 2025-05-14 02:49:23.859712 | orchestrator | Wednesday 14 May 2025 02:48:25 +0000 (0:00:02.431) 0:00:55.023 ********* 2025-05-14 02:49:23.859729 | orchestrator | 2025-05-14 02:49:23.859738 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-14 02:49:23.859748 | orchestrator | Wednesday 14 May 2025 02:48:25 +0000 (0:00:00.062) 0:00:55.086 ********* 2025-05-14 02:49:23.859757 | orchestrator | 2025-05-14 02:49:23.859767 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-14 02:49:23.859781 | orchestrator | Wednesday 14 May 2025 02:48:25 +0000 (0:00:00.056) 0:00:55.142 ********* 2025-05-14 02:49:23.859790 | orchestrator | 2025-05-14 02:49:23.859799 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-14 02:49:23.859809 | orchestrator | Wednesday 14 May 2025 02:48:26 +0000 (0:00:00.195) 0:00:55.337 ********* 2025-05-14 02:49:23.859818 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:49:23.859828 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:49:23.859838 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:49:23.859847 | orchestrator | 2025-05-14 02:49:23.859857 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-14 02:49:23.859866 | orchestrator | Wednesday 14 May 2025 02:48:27 +0000 (0:00:01.910) 0:00:57.248 ********* 2025-05-14 02:49:23.859876 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:49:23.859885 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:49:23.859895 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-14 02:49:23.859905 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
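
The two FAILED - RETRYING entries above come from the handler that polls the freshly restarted Grafana container on the first controller until it answers; the exact check the role performs is not visible in this log. As a rough sketch of the same readiness-wait pattern (the endpoint URL, timeout, and delay below are assumptions, and the retry budget of 12 merely echoes the counter printed above), it could be written as:

    # Illustrative readiness poll -- a sketch of the wait pattern above,
    # not the actual kolla-ansible handler.
    import json
    import time
    import urllib.error
    import urllib.request

    GRAFANA_HEALTH_URL = "http://testbed-node-0:3000/api/health"  # assumed endpoint

    def wait_for_grafana(url=GRAFANA_HEALTH_URL, retries=12, delay=10):
        """Poll Grafana's health API until it reports a working database."""
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if json.loads(resp.read().decode()).get("database") == "ok":
                        return True
            except (urllib.error.URLError, OSError, ValueError):
                pass  # not listening yet, connection reset, or non-JSON body
            print(f"FAILED - RETRYING: waiting for grafana ({retries - attempt - 1} retries left)")
            time.sleep(delay)
        return False

In this run the wait succeeds roughly 27 seconds after the container restart, as the timing recap further down shows.
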
2025-05-14 02:49:23.859914 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:49:23.859924 | orchestrator | 2025-05-14 02:49:23.859934 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-14 02:49:23.859943 | orchestrator | Wednesday 14 May 2025 02:48:54 +0000 (0:00:27.017) 0:01:24.266 ********* 2025-05-14 02:49:23.859953 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:49:23.859962 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:49:23.859972 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:49:23.859981 | orchestrator | 2025-05-14 02:49:23.859991 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-14 02:49:23.860001 | orchestrator | Wednesday 14 May 2025 02:49:16 +0000 (0:00:21.788) 0:01:46.054 ********* 2025-05-14 02:49:23.860010 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:49:23.860020 | orchestrator | 2025-05-14 02:49:23.860029 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-14 02:49:23.860039 | orchestrator | Wednesday 14 May 2025 02:49:19 +0000 (0:00:02.251) 0:01:48.306 ********* 2025-05-14 02:49:23.860048 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:49:23.860057 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:49:23.860067 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:49:23.860076 | orchestrator | 2025-05-14 02:49:23.860086 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-14 02:49:23.860095 | orchestrator | Wednesday 14 May 2025 02:49:19 +0000 (0:00:00.392) 0:01:48.698 ********* 2025-05-14 02:49:23.860106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-05-14 02:49:23.860116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-14 02:49:23.860128 | orchestrator | 2025-05-14 02:49:23.860137 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-14 02:49:23.860147 | orchestrator | Wednesday 14 May 2025 02:49:21 +0000 (0:00:02.503) 0:01:51.201 ********* 2025-05-14 02:49:23.860163 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:49:23.860173 | orchestrator | 2025-05-14 02:49:23.860183 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:49:23.860193 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:49:23.860202 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:49:23.860227 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:49:23.860238 | orchestrator | 2025-05-14 02:49:23.860248 | orchestrator | 2025-05-14 02:49:23.860257 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-14 02:49:23.860267 | orchestrator | Wednesday 14 May 2025 02:49:22 +0000 (0:00:00.401) 0:01:51.602 ********* 2025-05-14 02:49:23.860277 | orchestrator | =============================================================================== 2025-05-14 02:49:23.860286 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.07s 2025-05-14 02:49:23.860347 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.02s 2025-05-14 02:49:23.860358 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 21.79s 2025-05-14 02:49:23.860368 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.85s 2025-05-14 02:49:23.860377 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.50s 2025-05-14 02:49:23.860387 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.43s 2025-05-14 02:49:23.860397 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.25s 2025-05-14 02:49:23.860407 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.91s 2025-05-14 02:49:23.860421 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.61s 2025-05-14 02:49:23.860431 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.61s 2025-05-14 02:49:23.860440 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.48s 2025-05-14 02:49:23.860450 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.39s 2025-05-14 02:49:23.860460 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.37s 2025-05-14 02:49:23.860469 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.15s 2025-05-14 02:49:23.860479 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.92s 2025-05-14 02:49:23.860488 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.82s 2025-05-14 02:49:23.860498 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.75s 2025-05-14 02:49:23.860507 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.72s 2025-05-14 02:49:23.860517 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.69s 2025-05-14 02:49:23.860527 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.59s 2025-05-14 02:49:23.860536 | orchestrator | 2025-05-14 02:49:23 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state STARTED 2025-05-14 02:49:23.860547 | orchestrator | 2025-05-14 02:49:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:26.900094 | orchestrator | 2025-05-14 02:49:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:26.901416 | orchestrator | 2025-05-14 02:49:26 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:26.903998 | orchestrator | 2025-05-14 02:49:26 | INFO  | Task 073326a7-72b7-413d-90a3-d4ad1957398b is in state SUCCESS 2025-05-14 02:49:26.904057 | orchestrator | 2025-05-14 02:49:26 | INFO  | Wait 1 second(s) until the 
next check 2025-05-14 02:49:29.956117 | orchestrator | 2025-05-14 02:49:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:29.957809 | orchestrator | 2025-05-14 02:49:29 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:29.957856 | orchestrator | 2025-05-14 02:49:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:33.009739 | orchestrator | 2025-05-14 02:49:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:33.012009 | orchestrator | 2025-05-14 02:49:33 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:33.012432 | orchestrator | 2025-05-14 02:49:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:36.060529 | orchestrator | 2025-05-14 02:49:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:36.062231 | orchestrator | 2025-05-14 02:49:36 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:36.062876 | orchestrator | 2025-05-14 02:49:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:39.101344 | orchestrator | 2025-05-14 02:49:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:39.101472 | orchestrator | 2025-05-14 02:49:39 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:39.101493 | orchestrator | 2025-05-14 02:49:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:42.151468 | orchestrator | 2025-05-14 02:49:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:42.151564 | orchestrator | 2025-05-14 02:49:42 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:42.151578 | orchestrator | 2025-05-14 02:49:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:45.190986 | orchestrator | 2025-05-14 02:49:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:45.192577 | orchestrator | 2025-05-14 02:49:45 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:45.192654 | orchestrator | 2025-05-14 02:49:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:48.233788 | orchestrator | 2025-05-14 02:49:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:48.237003 | orchestrator | 2025-05-14 02:49:48 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:48.237176 | orchestrator | 2025-05-14 02:49:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:51.288308 | orchestrator | 2025-05-14 02:49:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:51.288469 | orchestrator | 2025-05-14 02:49:51 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:51.288500 | orchestrator | 2025-05-14 02:49:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:54.340787 | orchestrator | 2025-05-14 02:49:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:54.342085 | orchestrator | 2025-05-14 02:49:54 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:54.342105 | orchestrator | 2025-05-14 02:49:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:57.393703 | orchestrator | 2025-05-14 02:49:57 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:49:57.396238 | orchestrator | 2025-05-14 02:49:57 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:49:57.396311 | orchestrator | 2025-05-14 02:49:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:00.444886 | orchestrator | 2025-05-14 02:50:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:00.446937 | orchestrator | 2025-05-14 02:50:00 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:00.447028 | orchestrator | 2025-05-14 02:50:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:03.494094 | orchestrator | 2025-05-14 02:50:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:03.498815 | orchestrator | 2025-05-14 02:50:03 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:03.498887 | orchestrator | 2025-05-14 02:50:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:06.536299 | orchestrator | 2025-05-14 02:50:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:06.538552 | orchestrator | 2025-05-14 02:50:06 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:06.538884 | orchestrator | 2025-05-14 02:50:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:09.587654 | orchestrator | 2025-05-14 02:50:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:09.589047 | orchestrator | 2025-05-14 02:50:09 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:09.589100 | orchestrator | 2025-05-14 02:50:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:12.647232 | orchestrator | 2025-05-14 02:50:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:12.647381 | orchestrator | 2025-05-14 02:50:12 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:12.647444 | orchestrator | 2025-05-14 02:50:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:15.692867 | orchestrator | 2025-05-14 02:50:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:15.694363 | orchestrator | 2025-05-14 02:50:15 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:15.694426 | orchestrator | 2025-05-14 02:50:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:18.731052 | orchestrator | 2025-05-14 02:50:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:18.731140 | orchestrator | 2025-05-14 02:50:18 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:18.731151 | orchestrator | 2025-05-14 02:50:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:21.769029 | orchestrator | 2025-05-14 02:50:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:21.769868 | orchestrator | 2025-05-14 02:50:21 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:21.769912 | orchestrator | 2025-05-14 02:50:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:24.850211 | orchestrator | 2025-05-14 02:50:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:24.851118 | orchestrator 
| 2025-05-14 02:50:24 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:24.851172 | orchestrator | 2025-05-14 02:50:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:27.902999 | orchestrator | 2025-05-14 02:50:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:27.906371 | orchestrator | 2025-05-14 02:50:27 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:27.906453 | orchestrator | 2025-05-14 02:50:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:30.965046 | orchestrator | 2025-05-14 02:50:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:30.966971 | orchestrator | 2025-05-14 02:50:30 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:30.967499 | orchestrator | 2025-05-14 02:50:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:34.004426 | orchestrator | 2025-05-14 02:50:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:34.007693 | orchestrator | 2025-05-14 02:50:34 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:34.007805 | orchestrator | 2025-05-14 02:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:37.052281 | orchestrator | 2025-05-14 02:50:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:37.054924 | orchestrator | 2025-05-14 02:50:37 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:37.054993 | orchestrator | 2025-05-14 02:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:40.095761 | orchestrator | 2025-05-14 02:50:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:40.095868 | orchestrator | 2025-05-14 02:50:40 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:40.095884 | orchestrator | 2025-05-14 02:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:43.132568 | orchestrator | 2025-05-14 02:50:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:43.132672 | orchestrator | 2025-05-14 02:50:43 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:43.132680 | orchestrator | 2025-05-14 02:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:46.196871 | orchestrator | 2025-05-14 02:50:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:46.197979 | orchestrator | 2025-05-14 02:50:46 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:46.198973 | orchestrator | 2025-05-14 02:50:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:49.245388 | orchestrator | 2025-05-14 02:50:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:49.246240 | orchestrator | 2025-05-14 02:50:49 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:49.246277 | orchestrator | 2025-05-14 02:50:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:52.292118 | orchestrator | 2025-05-14 02:50:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:52.292286 | orchestrator | 2025-05-14 02:50:52 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 
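
The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are the deployment wrapper polling the IDs of the playbook runs it queued until each one reports SUCCESS. The real client code is not part of this log; as a minimal sketch of the same pattern (get_task_state is a hypothetical placeholder, not an actual API), the monitor loop could look like:

    # Sketch of the one-second polling pattern shown in the log.
    # get_task_state() is a hypothetical callable that resolves a task ID
    # to a state string such as "STARTED" or "SUCCESS".
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Block until every tracked task reaches a terminal state."""
        states = {task_id: "PENDING" for task_id in task_ids}
        while any(s not in ("SUCCESS", "FAILURE") for s in states.values()):
            for task_id, state in states.items():
                if state in ("SUCCESS", "FAILURE"):
                    continue  # finished tasks drop out of the status output
                states[task_id] = get_task_state(task_id)
                print(f"INFO | Task {task_id} is in state {states[task_id]}")
            if any(s not in ("SUCCESS", "FAILURE") for s in states.values()):
                print(f"INFO | Wait {interval} second(s) until the next check")
                time.sleep(interval)
        return states

In the log the set of tracked IDs also changes while the loop runs (a third task appears at 02:52:29 and finishes at 02:52:42), so the number of status lines per check varies; the sketch above only covers the fixed-set case.
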
2025-05-14 02:50:52.292312 | orchestrator | 2025-05-14 02:50:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:55.333430 | orchestrator | 2025-05-14 02:50:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:55.336381 | orchestrator | 2025-05-14 02:50:55 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:55.336453 | orchestrator | 2025-05-14 02:50:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:58.394409 | orchestrator | 2025-05-14 02:50:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:50:58.394862 | orchestrator | 2025-05-14 02:50:58 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:50:58.394902 | orchestrator | 2025-05-14 02:50:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:01.446257 | orchestrator | 2025-05-14 02:51:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:01.446817 | orchestrator | 2025-05-14 02:51:01 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:01.446861 | orchestrator | 2025-05-14 02:51:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:04.498559 | orchestrator | 2025-05-14 02:51:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:04.500741 | orchestrator | 2025-05-14 02:51:04 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:04.500814 | orchestrator | 2025-05-14 02:51:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:07.556228 | orchestrator | 2025-05-14 02:51:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:07.558089 | orchestrator | 2025-05-14 02:51:07 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:07.558189 | orchestrator | 2025-05-14 02:51:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:10.618368 | orchestrator | 2025-05-14 02:51:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:10.619909 | orchestrator | 2025-05-14 02:51:10 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:10.619966 | orchestrator | 2025-05-14 02:51:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:13.674786 | orchestrator | 2025-05-14 02:51:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:13.676364 | orchestrator | 2025-05-14 02:51:13 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:13.676395 | orchestrator | 2025-05-14 02:51:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:16.724492 | orchestrator | 2025-05-14 02:51:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:16.724594 | orchestrator | 2025-05-14 02:51:16 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:16.724609 | orchestrator | 2025-05-14 02:51:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:19.780447 | orchestrator | 2025-05-14 02:51:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:19.781504 | orchestrator | 2025-05-14 02:51:19 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:19.781551 | orchestrator | 2025-05-14 02:51:19 | INFO  | Wait 1 second(s) until 
the next check 2025-05-14 02:51:22.830811 | orchestrator | 2025-05-14 02:51:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:22.831540 | orchestrator | 2025-05-14 02:51:22 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:22.831568 | orchestrator | 2025-05-14 02:51:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:25.887713 | orchestrator | 2025-05-14 02:51:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:25.888768 | orchestrator | 2025-05-14 02:51:25 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:25.888834 | orchestrator | 2025-05-14 02:51:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:28.943305 | orchestrator | 2025-05-14 02:51:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:28.944170 | orchestrator | 2025-05-14 02:51:28 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:28.944186 | orchestrator | 2025-05-14 02:51:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:32.019746 | orchestrator | 2025-05-14 02:51:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:32.022710 | orchestrator | 2025-05-14 02:51:32 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:32.022773 | orchestrator | 2025-05-14 02:51:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:35.070896 | orchestrator | 2025-05-14 02:51:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:35.071879 | orchestrator | 2025-05-14 02:51:35 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:35.071899 | orchestrator | 2025-05-14 02:51:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:38.110109 | orchestrator | 2025-05-14 02:51:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:38.110615 | orchestrator | 2025-05-14 02:51:38 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:38.110648 | orchestrator | 2025-05-14 02:51:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:41.149596 | orchestrator | 2025-05-14 02:51:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:41.150274 | orchestrator | 2025-05-14 02:51:41 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:41.150294 | orchestrator | 2025-05-14 02:51:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:44.191870 | orchestrator | 2025-05-14 02:51:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:44.193745 | orchestrator | 2025-05-14 02:51:44 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:44.193793 | orchestrator | 2025-05-14 02:51:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:47.235948 | orchestrator | 2025-05-14 02:51:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:47.237687 | orchestrator | 2025-05-14 02:51:47 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:47.237727 | orchestrator | 2025-05-14 02:51:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:50.288579 | orchestrator | 2025-05-14 02:51:50 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:50.291698 | orchestrator | 2025-05-14 02:51:50 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:50.291886 | orchestrator | 2025-05-14 02:51:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:53.342123 | orchestrator | 2025-05-14 02:51:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:53.344454 | orchestrator | 2025-05-14 02:51:53 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:53.344574 | orchestrator | 2025-05-14 02:51:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:56.397574 | orchestrator | 2025-05-14 02:51:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:56.398983 | orchestrator | 2025-05-14 02:51:56 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:56.399026 | orchestrator | 2025-05-14 02:51:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:59.456609 | orchestrator | 2025-05-14 02:51:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:51:59.461823 | orchestrator | 2025-05-14 02:51:59 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:51:59.461925 | orchestrator | 2025-05-14 02:51:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:02.510378 | orchestrator | 2025-05-14 02:52:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:02.511962 | orchestrator | 2025-05-14 02:52:02 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:02.511995 | orchestrator | 2025-05-14 02:52:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:05.571199 | orchestrator | 2025-05-14 02:52:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:05.572581 | orchestrator | 2025-05-14 02:52:05 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:05.572634 | orchestrator | 2025-05-14 02:52:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:08.623812 | orchestrator | 2025-05-14 02:52:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:08.626963 | orchestrator | 2025-05-14 02:52:08 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:08.627488 | orchestrator | 2025-05-14 02:52:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:11.675910 | orchestrator | 2025-05-14 02:52:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:11.677462 | orchestrator | 2025-05-14 02:52:11 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:11.677518 | orchestrator | 2025-05-14 02:52:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:14.726787 | orchestrator | 2025-05-14 02:52:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:14.728042 | orchestrator | 2025-05-14 02:52:14 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:14.728110 | orchestrator | 2025-05-14 02:52:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:17.774947 | orchestrator | 2025-05-14 02:52:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:17.775255 | orchestrator 
| 2025-05-14 02:52:17 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:17.775281 | orchestrator | 2025-05-14 02:52:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:20.823751 | orchestrator | 2025-05-14 02:52:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:20.823854 | orchestrator | 2025-05-14 02:52:20 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:20.823868 | orchestrator | 2025-05-14 02:52:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:23.878947 | orchestrator | 2025-05-14 02:52:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:23.880826 | orchestrator | 2025-05-14 02:52:23 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:23.881257 | orchestrator | 2025-05-14 02:52:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:26.934706 | orchestrator | 2025-05-14 02:52:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:26.938320 | orchestrator | 2025-05-14 02:52:26 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:26.938412 | orchestrator | 2025-05-14 02:52:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:29.993620 | orchestrator | 2025-05-14 02:52:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:29.995725 | orchestrator | 2025-05-14 02:52:29 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:29.997601 | orchestrator | 2025-05-14 02:52:29 | INFO  | Task 4e30b37e-db97-4576-803b-e6d93fcae091 is in state STARTED 2025-05-14 02:52:29.997658 | orchestrator | 2025-05-14 02:52:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:33.054287 | orchestrator | 2025-05-14 02:52:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:33.056000 | orchestrator | 2025-05-14 02:52:33 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:33.057760 | orchestrator | 2025-05-14 02:52:33 | INFO  | Task 4e30b37e-db97-4576-803b-e6d93fcae091 is in state STARTED 2025-05-14 02:52:33.057811 | orchestrator | 2025-05-14 02:52:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:36.110839 | orchestrator | 2025-05-14 02:52:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:36.112571 | orchestrator | 2025-05-14 02:52:36 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:36.113493 | orchestrator | 2025-05-14 02:52:36 | INFO  | Task 4e30b37e-db97-4576-803b-e6d93fcae091 is in state STARTED 2025-05-14 02:52:36.113724 | orchestrator | 2025-05-14 02:52:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:39.166247 | orchestrator | 2025-05-14 02:52:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:39.167666 | orchestrator | 2025-05-14 02:52:39 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:39.169972 | orchestrator | 2025-05-14 02:52:39 | INFO  | Task 4e30b37e-db97-4576-803b-e6d93fcae091 is in state STARTED 2025-05-14 02:52:39.170387 | orchestrator | 2025-05-14 02:52:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:42.209924 | orchestrator | 2025-05-14 02:52:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 
is in state STARTED 2025-05-14 02:52:42.210097 | orchestrator | 2025-05-14 02:52:42 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:42.210471 | orchestrator | 2025-05-14 02:52:42 | INFO  | Task 4e30b37e-db97-4576-803b-e6d93fcae091 is in state SUCCESS 2025-05-14 02:52:42.212643 | orchestrator | 2025-05-14 02:52:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:45.264339 | orchestrator | 2025-05-14 02:52:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:45.265476 | orchestrator | 2025-05-14 02:52:45 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:45.265561 | orchestrator | 2025-05-14 02:52:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:48.314478 | orchestrator | 2025-05-14 02:52:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:48.315466 | orchestrator | 2025-05-14 02:52:48 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:48.315609 | orchestrator | 2025-05-14 02:52:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:51.364682 | orchestrator | 2025-05-14 02:52:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:51.366900 | orchestrator | 2025-05-14 02:52:51 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:51.367131 | orchestrator | 2025-05-14 02:52:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:54.410303 | orchestrator | 2025-05-14 02:52:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:54.411072 | orchestrator | 2025-05-14 02:52:54 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:54.411154 | orchestrator | 2025-05-14 02:52:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:57.471466 | orchestrator | 2025-05-14 02:52:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:52:57.472868 | orchestrator | 2025-05-14 02:52:57 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:52:57.472912 | orchestrator | 2025-05-14 02:52:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:00.516691 | orchestrator | 2025-05-14 02:53:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:00.517810 | orchestrator | 2025-05-14 02:53:00 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:00.517904 | orchestrator | 2025-05-14 02:53:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:03.569970 | orchestrator | 2025-05-14 02:53:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:03.570471 | orchestrator | 2025-05-14 02:53:03 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:03.570503 | orchestrator | 2025-05-14 02:53:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:06.612575 | orchestrator | 2025-05-14 02:53:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:06.612955 | orchestrator | 2025-05-14 02:53:06 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:06.612983 | orchestrator | 2025-05-14 02:53:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:09.647582 | orchestrator | 2025-05-14 02:53:09 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:09.649177 | orchestrator | 2025-05-14 02:53:09 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:09.649241 | orchestrator | 2025-05-14 02:53:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:12.690754 | orchestrator | 2025-05-14 02:53:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:12.693618 | orchestrator | 2025-05-14 02:53:12 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:12.693673 | orchestrator | 2025-05-14 02:53:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:15.727332 | orchestrator | 2025-05-14 02:53:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:15.729408 | orchestrator | 2025-05-14 02:53:15 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:15.729567 | orchestrator | 2025-05-14 02:53:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:18.774940 | orchestrator | 2025-05-14 02:53:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:18.777271 | orchestrator | 2025-05-14 02:53:18 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:18.777318 | orchestrator | 2025-05-14 02:53:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:21.830152 | orchestrator | 2025-05-14 02:53:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:21.833007 | orchestrator | 2025-05-14 02:53:21 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:21.833116 | orchestrator | 2025-05-14 02:53:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:24.876222 | orchestrator | 2025-05-14 02:53:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:24.877402 | orchestrator | 2025-05-14 02:53:24 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:24.877482 | orchestrator | 2025-05-14 02:53:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:27.925841 | orchestrator | 2025-05-14 02:53:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:27.927460 | orchestrator | 2025-05-14 02:53:27 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:27.927561 | orchestrator | 2025-05-14 02:53:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:30.981905 | orchestrator | 2025-05-14 02:53:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:30.981996 | orchestrator | 2025-05-14 02:53:30 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:30.982012 | orchestrator | 2025-05-14 02:53:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:34.045500 | orchestrator | 2025-05-14 02:53:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:34.047055 | orchestrator | 2025-05-14 02:53:34 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:34.047168 | orchestrator | 2025-05-14 02:53:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:37.093526 | orchestrator | 2025-05-14 02:53:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:37.094624 | orchestrator 
| 2025-05-14 02:53:37 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:37.094647 | orchestrator | 2025-05-14 02:53:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:40.135323 | orchestrator | 2025-05-14 02:53:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:40.135804 | orchestrator | 2025-05-14 02:53:40 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:40.135849 | orchestrator | 2025-05-14 02:53:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:43.179658 | orchestrator | 2025-05-14 02:53:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:43.180327 | orchestrator | 2025-05-14 02:53:43 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:43.180533 | orchestrator | 2025-05-14 02:53:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:46.228896 | orchestrator | 2025-05-14 02:53:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:46.228997 | orchestrator | 2025-05-14 02:53:46 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state STARTED 2025-05-14 02:53:46.229012 | orchestrator | 2025-05-14 02:53:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:49.268960 | orchestrator | 2025-05-14 02:53:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:49.273790 | orchestrator | 2025-05-14 02:53:49 | INFO  | Task 5f376801-ae55-43e5-8a8d-9b61e75729d8 is in state SUCCESS 2025-05-14 02:53:49.275950 | orchestrator | 2025-05-14 02:53:49.275992 | orchestrator | 2025-05-14 02:53:49.276005 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:53:49.276017 | orchestrator | 2025-05-14 02:53:49.276028 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:53:49.276039 | orchestrator | Wednesday 14 May 2025 02:46:47 +0000 (0:00:00.188) 0:00:00.188 ********* 2025-05-14 02:53:49.276086 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:53:49.276098 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:53:49.276109 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:53:49.276120 | orchestrator | 2025-05-14 02:53:49.276131 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:53:49.276143 | orchestrator | Wednesday 14 May 2025 02:46:47 +0000 (0:00:00.310) 0:00:00.499 ********* 2025-05-14 02:53:49.276154 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-14 02:53:49.276165 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-14 02:53:49.276176 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-14 02:53:49.276188 | orchestrator | 2025-05-14 02:53:49.276199 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-14 02:53:49.276210 | orchestrator | 2025-05-14 02:53:49.276221 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-14 02:53:49.276232 | orchestrator | Wednesday 14 May 2025 02:46:48 +0000 (0:00:00.479) 0:00:00.978 ********* 2025-05-14 02:53:49.276243 | orchestrator | 2025-05-14 02:53:49.276254 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-05-14 02:53:49.276265 | orchestrator | 
ok: [testbed-node-0] 2025-05-14 02:53:49.276276 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:53:49.276287 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:53:49.276298 | orchestrator | 2025-05-14 02:53:49.276308 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:53:49.276320 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:53:49.276333 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:53:49.276344 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:53:49.276355 | orchestrator | 2025-05-14 02:53:49.276366 | orchestrator | 2025-05-14 02:53:49.276377 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:53:49.276388 | orchestrator | Wednesday 14 May 2025 02:49:25 +0000 (0:02:36.848) 0:02:37.827 ********* 2025-05-14 02:53:49.276415 | orchestrator | =============================================================================== 2025-05-14 02:53:49.276426 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 156.85s 2025-05-14 02:53:49.276437 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2025-05-14 02:53:49.276448 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-05-14 02:53:49.276459 | orchestrator | 2025-05-14 02:53:49.276470 | orchestrator | None 2025-05-14 02:53:49.276481 | orchestrator | 2025-05-14 02:53:49.276492 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:53:49.276503 | orchestrator | 2025-05-14 02:53:49.276514 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-14 02:53:49.276525 | orchestrator | Wednesday 14 May 2025 02:45:19 +0000 (0:00:00.556) 0:00:00.556 ********* 2025-05-14 02:53:49.276535 | orchestrator | changed: [testbed-manager] 2025-05-14 02:53:49.276547 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.276560 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:53:49.276731 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:53:49.276745 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.276780 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.276792 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.276805 | orchestrator | 2025-05-14 02:53:49.276817 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:53:49.276829 | orchestrator | Wednesday 14 May 2025 02:45:20 +0000 (0:00:01.374) 0:00:01.931 ********* 2025-05-14 02:53:49.276842 | orchestrator | changed: [testbed-manager] 2025-05-14 02:53:49.276854 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.276866 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:53:49.276878 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:53:49.276890 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.276901 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.276912 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.276923 | orchestrator | 2025-05-14 02:53:49.276934 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:53:49.276945 | 
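
For context, the long-running task in the recap above ("Waiting for Nova public port to be UP", roughly 157 seconds) is essentially a TCP reachability probe against the Nova API endpoint. A minimal Ansible sketch of such a check is shown below; it is illustrative only, not the exact task from the OSISM playbooks, and the host and port are taken from the endpoint URLs that appear later in this log.

  # Illustrative sketch: poll the Nova public API endpoint until the TCP port
  # accepts connections. api.testbed.osism.xyz:8774 comes from the endpoint
  # URLs in this log; the real playbook may probe a different address.
  - name: Wait for the Nova public API port
    ansible.builtin.wait_for:
      host: api.testbed.osism.xyz
      port: 8774
      state: started     # port is accepting connections
      timeout: 600       # the run above needed roughly 157 seconds
    delegate_to: localhost
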
orchestrator | Wednesday 14 May 2025 02:45:21 +0000 (0:00:01.154) 0:00:03.086 ********* 2025-05-14 02:53:49.276955 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-14 02:53:49.276966 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-14 02:53:49.276977 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-14 02:53:49.276988 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-14 02:53:49.276999 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-14 02:53:49.277010 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-14 02:53:49.277020 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-14 02:53:49.277031 | orchestrator | 2025-05-14 02:53:49.277042 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-14 02:53:49.277083 | orchestrator | 2025-05-14 02:53:49.277095 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-14 02:53:49.277106 | orchestrator | Wednesday 14 May 2025 02:45:22 +0000 (0:00:01.299) 0:00:04.385 ********* 2025-05-14 02:53:49.277116 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:53:49.277127 | orchestrator | 2025-05-14 02:53:49.277137 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-14 02:53:49.277162 | orchestrator | Wednesday 14 May 2025 02:45:23 +0000 (0:00:00.961) 0:00:05.347 ********* 2025-05-14 02:53:49.277174 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-14 02:53:49.277184 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-14 02:53:49.277195 | orchestrator | 2025-05-14 02:53:49.277206 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-14 02:53:49.277217 | orchestrator | Wednesday 14 May 2025 02:45:28 +0000 (0:00:04.885) 0:00:10.232 ********* 2025-05-14 02:53:49.277227 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:53:49.277238 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:53:49.277249 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.277259 | orchestrator | 2025-05-14 02:53:49.277270 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-14 02:53:49.277281 | orchestrator | Wednesday 14 May 2025 02:45:33 +0000 (0:00:05.000) 0:00:15.233 ********* 2025-05-14 02:53:49.277291 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.277302 | orchestrator | 2025-05-14 02:53:49.277313 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-14 02:53:49.277323 | orchestrator | Wednesday 14 May 2025 02:45:34 +0000 (0:00:00.964) 0:00:16.197 ********* 2025-05-14 02:53:49.277334 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.277345 | orchestrator | 2025-05-14 02:53:49.277355 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-14 02:53:49.277366 | orchestrator | Wednesday 14 May 2025 02:45:37 +0000 (0:00:02.277) 0:00:18.474 ********* 2025-05-14 02:53:49.277385 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.277396 | orchestrator | 2025-05-14 02:53:49.277407 | orchestrator | TASK [nova : include_tasks] 
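
The two database bootstrap tasks above ("Creating Nova databases" and "Creating Nova databases user and setting permissions") create the nova_api and nova_cell0 schemas and a database account for Nova. A rough equivalent using the community.mysql collection is sketched below; kolla-ansible actually drives this through its kolla_toolbox wrappers, and the VIP address and credential variables here are placeholders, not values from this deployment.

  # Approximate equivalent of the database bootstrap tasks above.
  - name: Create the Nova API and cell0 databases
    community.mysql.mysql_db:
      login_host: 192.168.16.9          # hypothetical database VIP
      login_user: root
      login_password: "{{ database_password }}"
      name: "{{ item }}"
      state: present
    loop:
      - nova_api
      - nova_cell0

  - name: Create the Nova database user and grant permissions
    community.mysql.mysql_user:
      login_host: 192.168.16.9
      login_user: root
      login_password: "{{ database_password }}"
      name: nova
      password: "{{ nova_database_password }}"
      host: "%"
      priv: "nova_api.*:ALL/nova_cell0.*:ALL"
      state: present
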
**************************************************** 2025-05-14 02:53:49.277418 | orchestrator | Wednesday 14 May 2025 02:45:40 +0000 (0:00:03.610) 0:00:22.085 ********* 2025-05-14 02:53:49.277429 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.277439 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.277450 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.277461 | orchestrator | 2025-05-14 02:53:49.277471 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-14 02:53:49.277482 | orchestrator | Wednesday 14 May 2025 02:45:41 +0000 (0:00:00.898) 0:00:22.983 ********* 2025-05-14 02:53:49.277507 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:53:49.277529 | orchestrator | 2025-05-14 02:53:49.277541 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-14 02:53:49.277551 | orchestrator | Wednesday 14 May 2025 02:46:14 +0000 (0:00:32.690) 0:00:55.673 ********* 2025-05-14 02:53:49.277562 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.277573 | orchestrator | 2025-05-14 02:53:49.277590 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-14 02:53:49.277600 | orchestrator | Wednesday 14 May 2025 02:46:27 +0000 (0:00:13.494) 0:01:09.167 ********* 2025-05-14 02:53:49.277611 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:53:49.277622 | orchestrator | 2025-05-14 02:53:49.277633 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-14 02:53:49.277643 | orchestrator | Wednesday 14 May 2025 02:46:38 +0000 (0:00:10.390) 0:01:19.558 ********* 2025-05-14 02:53:49.277654 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:53:49.277665 | orchestrator | 2025-05-14 02:53:49.277676 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-14 02:53:49.277687 | orchestrator | Wednesday 14 May 2025 02:46:38 +0000 (0:00:00.778) 0:01:20.336 ********* 2025-05-14 02:53:49.277697 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.277708 | orchestrator | 2025-05-14 02:53:49.277719 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 02:53:49.277730 | orchestrator | Wednesday 14 May 2025 02:46:39 +0000 (0:00:00.501) 0:01:20.838 ********* 2025-05-14 02:53:49.277740 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:53:49.277751 | orchestrator | 2025-05-14 02:53:49.277762 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-14 02:53:49.277773 | orchestrator | Wednesday 14 May 2025 02:46:40 +0000 (0:00:00.619) 0:01:21.458 ********* 2025-05-14 02:53:49.277784 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:53:49.277794 | orchestrator | 2025-05-14 02:53:49.277805 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-14 02:53:49.277816 | orchestrator | Wednesday 14 May 2025 02:46:56 +0000 (0:00:16.231) 0:01:37.690 ********* 2025-05-14 02:53:49.277827 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.277837 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.277848 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.277859 | orchestrator | 2025-05-14 02:53:49.277869 | orchestrator | PLAY [Bootstrap nova cell 
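
The "Running Nova API bootstrap container", "Create cell0 mappings" and "Get a list of existing cells" tasks above wrap nova-manage invocations that run inside a one-shot bootstrap container. The sketch below shows the underlying commands run directly via Ansible; the connection string is illustrative, and the real role executes these inside the kolla bootstrap container rather than on the host.

  # Simplified view of what the bootstrap and cell0 steps boil down to.
  - name: Sync the Nova API database schema
    ansible.builtin.command: nova-manage api_db sync

  - name: Map cell0 to its database
    ansible.builtin.command: >
      nova-manage cell_v2 map_cell0
      --database_connection mysql+pymysql://nova:{{ nova_database_password }}@192.168.16.9/nova_cell0

  - name: List existing cells (used to decide between create and update)
    ansible.builtin.command: nova-manage cell_v2 list_cells --verbose
    register: existing_cells
    changed_when: false
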
databases] ******************************************* 2025-05-14 02:53:49.277880 | orchestrator | 2025-05-14 02:53:49.277891 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-14 02:53:49.277902 | orchestrator | Wednesday 14 May 2025 02:46:56 +0000 (0:00:00.445) 0:01:38.135 ********* 2025-05-14 02:53:49.277912 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:53:49.277923 | orchestrator | 2025-05-14 02:53:49.277934 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-14 02:53:49.278169 | orchestrator | Wednesday 14 May 2025 02:46:57 +0000 (0:00:01.031) 0:01:39.167 ********* 2025-05-14 02:53:49.278196 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278208 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278219 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.278230 | orchestrator | 2025-05-14 02:53:49.278241 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-14 02:53:49.278252 | orchestrator | Wednesday 14 May 2025 02:47:00 +0000 (0:00:02.529) 0:01:41.696 ********* 2025-05-14 02:53:49.278262 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278273 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278284 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.278295 | orchestrator | 2025-05-14 02:53:49.278306 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-14 02:53:49.278327 | orchestrator | Wednesday 14 May 2025 02:47:02 +0000 (0:00:02.217) 0:01:43.914 ********* 2025-05-14 02:53:49.278338 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.278350 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278360 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278371 | orchestrator | 2025-05-14 02:53:49.278382 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-14 02:53:49.278393 | orchestrator | Wednesday 14 May 2025 02:47:02 +0000 (0:00:00.422) 0:01:44.337 ********* 2025-05-14 02:53:49.278403 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 02:53:49.278414 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278425 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 02:53:49.278436 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278447 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-14 02:53:49.278458 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-14 02:53:49.278469 | orchestrator | 2025-05-14 02:53:49.278480 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-14 02:53:49.278491 | orchestrator | Wednesday 14 May 2025 02:47:11 +0000 (0:00:08.566) 0:01:52.904 ********* 2025-05-14 02:53:49.278502 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.278513 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278523 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278534 | orchestrator | 2025-05-14 02:53:49.278545 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-14 02:53:49.278556 | orchestrator | Wednesday 14 May 2025 02:47:11 +0000 (0:00:00.463) 0:01:53.367 ********* 2025-05-14 02:53:49.278567 | 
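
The service-rabbitmq tasks above ensure that the RabbitMQ vhost and the messaging user Nova will use already exist (most hosts are skipped because the work is delegated to a single node). The intent, expressed with the community.rabbitmq modules rather than kolla-ansible's service-rabbitmq role, looks roughly like the sketch below; the vhost name and credentials are placeholders.

  # Approximate intent of the service-rabbitmq steps above.
  - name: Ensure the RabbitMQ vhost for OpenStack exists
    community.rabbitmq.rabbitmq_vhost:
      name: /
      state: present

  - name: Ensure the messaging user exists with full permissions
    community.rabbitmq.rabbitmq_user:
      user: openstack
      password: "{{ rabbitmq_password }}"
      vhost: /
      configure_priv: ".*"
      read_priv: ".*"
      write_priv: ".*"
      state: present
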
orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-14 02:53:49.278577 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.278588 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 02:53:49.278599 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278610 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 02:53:49.278621 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278632 | orchestrator | 2025-05-14 02:53:49.278643 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-14 02:53:49.278653 | orchestrator | Wednesday 14 May 2025 02:47:12 +0000 (0:00:00.991) 0:01:54.359 ********* 2025-05-14 02:53:49.278664 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278675 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278686 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.278697 | orchestrator | 2025-05-14 02:53:49.278707 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-14 02:53:49.278718 | orchestrator | Wednesday 14 May 2025 02:47:13 +0000 (0:00:00.494) 0:01:54.853 ********* 2025-05-14 02:53:49.278729 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278746 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278757 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.278768 | orchestrator | 2025-05-14 02:53:49.278779 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-14 02:53:49.278790 | orchestrator | Wednesday 14 May 2025 02:47:14 +0000 (0:00:01.160) 0:01:56.014 ********* 2025-05-14 02:53:49.278808 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278819 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278830 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.278840 | orchestrator | 2025-05-14 02:53:49.278851 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-14 02:53:49.278862 | orchestrator | Wednesday 14 May 2025 02:47:17 +0000 (0:00:02.719) 0:01:58.734 ********* 2025-05-14 02:53:49.278873 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278884 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278895 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:53:49.278920 | orchestrator | 2025-05-14 02:53:49.278932 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-14 02:53:49.278943 | orchestrator | Wednesday 14 May 2025 02:47:37 +0000 (0:00:19.895) 0:02:18.629 ********* 2025-05-14 02:53:49.278964 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.278975 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.278986 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:53:49.278997 | orchestrator | 2025-05-14 02:53:49.279008 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-14 02:53:49.279019 | orchestrator | Wednesday 14 May 2025 02:47:48 +0000 (0:00:11.192) 0:02:29.822 ********* 2025-05-14 02:53:49.279030 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:53:49.279041 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.279131 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.279142 | orchestrator | 2025-05-14 02:53:49.279176 | orchestrator | TASK [nova-cell : Create cell] 
************************************************* 2025-05-14 02:53:49.279188 | orchestrator | Wednesday 14 May 2025 02:47:49 +0000 (0:00:01.254) 0:02:31.077 ********* 2025-05-14 02:53:49.279199 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.279210 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.279220 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.279231 | orchestrator | 2025-05-14 02:53:49.279242 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-14 02:53:49.279253 | orchestrator | Wednesday 14 May 2025 02:48:00 +0000 (0:00:11.173) 0:02:42.251 ********* 2025-05-14 02:53:49.279276 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.279287 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.279298 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.279320 | orchestrator | 2025-05-14 02:53:49.279331 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-14 02:53:49.279386 | orchestrator | Wednesday 14 May 2025 02:48:01 +0000 (0:00:01.139) 0:02:43.390 ********* 2025-05-14 02:53:49.279412 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.279423 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.279434 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.279445 | orchestrator | 2025-05-14 02:53:49.279456 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-14 02:53:49.279467 | orchestrator | 2025-05-14 02:53:49.279477 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 02:53:49.279488 | orchestrator | Wednesday 14 May 2025 02:48:02 +0000 (0:00:00.383) 0:02:43.774 ********* 2025-05-14 02:53:49.279506 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:53:49.279517 | orchestrator | 2025-05-14 02:53:49.279528 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-14 02:53:49.279539 | orchestrator | Wednesday 14 May 2025 02:48:02 +0000 (0:00:00.559) 0:02:44.333 ********* 2025-05-14 02:53:49.279549 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-14 02:53:49.279656 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-14 02:53:49.279668 | orchestrator | 2025-05-14 02:53:49.279678 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-14 02:53:49.279689 | orchestrator | Wednesday 14 May 2025 02:48:06 +0000 (0:00:03.617) 0:02:47.951 ********* 2025-05-14 02:53:49.279709 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-14 02:53:49.279721 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-14 02:53:49.279732 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-14 02:53:49.279753 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-14 02:53:49.279764 | orchestrator | 2025-05-14 02:53:49.279775 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-14 02:53:49.279786 
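
The service-ks-register steps above register the nova service of type compute in Keystone and create its internal and public endpoints (the URLs are visible in the log output). A CLI equivalent is sketched below for reference; the real role uses Ansible OpenStack modules with admin credentials from the environment, and the region name here is an assumption.

  # CLI equivalent of the Keystone registration performed above.
  - name: Register the nova compute service and its endpoints in Keystone
    ansible.builtin.command: "{{ item }}"
    loop:
      - openstack service create --name nova --description "OpenStack Compute" compute
      - openstack endpoint create --region RegionOne compute internal https://api-int.testbed.osism.xyz:8774/v2.1
      - openstack endpoint create --region RegionOne compute public https://api.testbed.osism.xyz:8774/v2.1
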
| orchestrator | Wednesday 14 May 2025 02:48:13 +0000 (0:00:06.977) 0:02:54.928 ********* 2025-05-14 02:53:49.279839 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:53:49.279849 | orchestrator | 2025-05-14 02:53:49.279860 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-14 02:53:49.279871 | orchestrator | Wednesday 14 May 2025 02:48:16 +0000 (0:00:03.331) 0:02:58.259 ********* 2025-05-14 02:53:49.279882 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:53:49.279893 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-14 02:53:49.279904 | orchestrator | 2025-05-14 02:53:49.279915 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-14 02:53:49.279926 | orchestrator | Wednesday 14 May 2025 02:48:20 +0000 (0:00:03.983) 0:03:02.243 ********* 2025-05-14 02:53:49.279937 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:53:49.279947 | orchestrator | 2025-05-14 02:53:49.279958 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-14 02:53:49.279969 | orchestrator | Wednesday 14 May 2025 02:48:24 +0000 (0:00:03.652) 0:03:05.896 ********* 2025-05-14 02:53:49.279980 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-14 02:53:49.279991 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-14 02:53:49.280002 | orchestrator | 2025-05-14 02:53:49.280013 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-14 02:53:49.280024 | orchestrator | Wednesday 14 May 2025 02:48:32 +0000 (0:00:08.452) 0:03:14.348 ********* 2025-05-14 02:53:49.280199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.280246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.280274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.280288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.280300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.280312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.280330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.280349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.280361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.280373 | orchestrator | 2025-05-14 02:53:49.280384 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-14 02:53:49.280396 | orchestrator | Wednesday 14 May 2025 02:48:34 +0000 (0:00:01.389) 0:03:15.737 ********* 2025-05-14 02:53:49.280406 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.280418 | orchestrator | 2025-05-14 02:53:49.280429 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-14 02:53:49.280439 | orchestrator | Wednesday 14 May 2025 02:48:34 +0000 (0:00:00.249) 0:03:15.987 ********* 2025-05-14 02:53:49.280450 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.280461 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.280476 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.280487 | orchestrator | 2025-05-14 02:53:49.280497 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-14 02:53:49.280508 | orchestrator | Wednesday 14 May 2025 02:48:34 +0000 (0:00:00.347) 0:03:16.334 ********* 2025-05-14 02:53:49.280519 | orchestrator | 
ok: [testbed-node-0 -> localhost] 2025-05-14 02:53:49.280530 | orchestrator | 2025-05-14 02:53:49.280540 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-14 02:53:49.280551 | orchestrator | Wednesday 14 May 2025 02:48:35 +0000 (0:00:00.499) 0:03:16.834 ********* 2025-05-14 02:53:49.280562 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.280573 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.280584 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.280594 | orchestrator | 2025-05-14 02:53:49.280605 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 02:53:49.280616 | orchestrator | Wednesday 14 May 2025 02:48:35 +0000 (0:00:00.271) 0:03:17.105 ********* 2025-05-14 02:53:49.280627 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:53:49.280638 | orchestrator | 2025-05-14 02:53:49.280648 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-14 02:53:49.280659 | orchestrator | Wednesday 14 May 2025 02:48:36 +0000 (0:00:00.782) 0:03:17.887 ********* 2025-05-14 02:53:49.280671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.280699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.280718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.280731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.280751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.280768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.280780 | orchestrator | 2025-05-14 02:53:49.280791 | orchestrator | TASK [service-cert-copy : nova | Copying over backend 
internal TLS certificate] *** 2025-05-14 02:53:49.280802 | orchestrator | Wednesday 14 May 2025 02:48:39 +0000 (0:00:02.729) 0:03:20.617 ********* 2025-05-14 02:53:49.280813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:53:49.280830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.280842 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.280854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:53:49.280875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.280887 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.280906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:53:49.280918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.280930 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.280941 | orchestrator | 2025-05-14 02:53:49.280956 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-14 02:53:49.280967 | orchestrator | Wednesday 14 May 2025 02:48:39 +0000 (0:00:00.592) 0:03:21.209 ********* 2025-05-14 02:53:49.280979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:53:49.280998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281009 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.281029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:53:49.281042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281075 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.281092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:53:49.281111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281123 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.281134 | orchestrator | 2025-05-14 02:53:49.281145 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-14 02:53:49.281156 | orchestrator | Wednesday 14 May 2025 02:48:40 +0000 (0:00:01.150) 0:03:22.360 ********* 2025-05-14 02:53:49.281176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.281198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.281218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.281230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.281249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2025-05-14 02:53:49.281261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.281273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.281306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281317 | orchestrator | 2025-05-14 02:53:49.281328 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-14 02:53:49.281339 | orchestrator | Wednesday 14 May 2025 02:48:43 +0000 (0:00:02.718) 0:03:25.079 ********* 2025-05-14 02:53:49.281358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.281371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.281388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.281407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.281420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.281452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.281488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281500 | orchestrator | 2025-05-14 02:53:49.281511 | orchestrator | TASK [nova : Copying over existing policy file] 
******************************** 2025-05-14 02:53:49.281522 | orchestrator | Wednesday 14 May 2025 02:48:50 +0000 (0:00:06.382) 0:03:31.461 ********* 2025-05-14 02:53:49.281534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:53:49.281552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281575 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.281592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:53:49.281610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281634 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.281653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:53:49.281665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281694 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.281706 | orchestrator | 2025-05-14 02:53:49.281717 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-14 02:53:49.281732 | orchestrator | Wednesday 14 May 2025 02:48:50 +0000 (0:00:00.799) 0:03:32.260 ********* 2025-05-14 02:53:49.281743 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.281755 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:53:49.281766 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:53:49.281777 | orchestrator | 2025-05-14 02:53:49.281787 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-14 02:53:49.281798 | orchestrator | Wednesday 14 May 2025 02:48:52 +0000 (0:00:01.682) 0:03:33.942 ********* 2025-05-14 02:53:49.281809 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.281820 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.281831 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.281842 | orchestrator | 2025-05-14 02:53:49.281853 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-14 02:53:49.281864 | orchestrator | Wednesday 14 May 2025 02:48:52 +0000 (0:00:00.463) 0:03:34.406 ********* 2025-05-14 02:53:49.281876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.281896 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.281920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:53:49.281933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.281945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.281957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.281993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.282005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.282131 | orchestrator | 2025-05-14 02:53:49.282145 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-14 02:53:49.282156 | orchestrator | Wednesday 14 May 2025 02:48:54 +0000 (0:00:01.959) 0:03:36.366 ********* 2025-05-14 02:53:49.282167 | orchestrator | 2025-05-14 02:53:49.282178 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-14 02:53:49.282199 | orchestrator | Wednesday 14 May 2025 02:48:55 +0000 (0:00:00.372) 0:03:36.738 ********* 2025-05-14 02:53:49.282211 | orchestrator | 2025-05-14 02:53:49.282221 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2025-05-14 02:53:49.282232 | orchestrator | Wednesday 14 May 2025 02:48:55 +0000 (0:00:00.127) 0:03:36.866 ********* 2025-05-14 02:53:49.282242 | orchestrator | 2025-05-14 02:53:49.282253 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-14 02:53:49.282264 | orchestrator | Wednesday 14 May 2025 02:48:55 +0000 (0:00:00.277) 0:03:37.144 ********* 2025-05-14 02:53:49.282275 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.282285 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:53:49.282297 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:53:49.282307 | orchestrator | 2025-05-14 02:53:49.282318 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-14 02:53:49.282329 | orchestrator | Wednesday 14 May 2025 02:49:16 +0000 (0:00:20.903) 0:03:58.047 ********* 2025-05-14 02:53:49.282339 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:53:49.282350 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.282361 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:53:49.282371 | orchestrator | 2025-05-14 02:53:49.282382 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-14 02:53:49.282392 | orchestrator | 2025-05-14 02:53:49.282403 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:53:49.282414 | orchestrator | Wednesday 14 May 2025 02:49:27 +0000 (0:00:10.761) 0:04:08.809 ********* 2025-05-14 02:53:49.282425 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:53:49.282437 | orchestrator | 2025-05-14 02:53:49.282448 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:53:49.282458 | orchestrator | Wednesday 14 May 2025 02:49:28 +0000 (0:00:01.107) 0:04:09.916 ********* 2025-05-14 02:53:49.282469 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.282480 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.282491 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.282510 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.282520 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.282531 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.282542 | orchestrator | 2025-05-14 02:53:49.282553 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-14 02:53:49.282563 | orchestrator | Wednesday 14 May 2025 02:49:29 +0000 (0:00:00.638) 0:04:10.555 ********* 2025-05-14 02:53:49.282574 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.282585 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.282596 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.282607 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:53:49.282617 | orchestrator | 2025-05-14 02:53:49.282627 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-14 02:53:49.282636 | orchestrator | Wednesday 14 May 2025 02:49:30 +0000 (0:00:01.027) 0:04:11.583 ********* 2025-05-14 02:53:49.282646 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-14 02:53:49.282656 | orchestrator 
| ok: [testbed-node-4] => (item=br_netfilter) 2025-05-14 02:53:49.282671 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-14 02:53:49.282687 | orchestrator | 2025-05-14 02:53:49.282711 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-14 02:53:49.282727 | orchestrator | Wednesday 14 May 2025 02:49:31 +0000 (0:00:00.861) 0:04:12.444 ********* 2025-05-14 02:53:49.282743 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-14 02:53:49.282759 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-14 02:53:49.282776 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-14 02:53:49.282788 | orchestrator | 2025-05-14 02:53:49.282797 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-14 02:53:49.282807 | orchestrator | Wednesday 14 May 2025 02:49:32 +0000 (0:00:01.330) 0:04:13.775 ********* 2025-05-14 02:53:49.282817 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-14 02:53:49.282826 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.282836 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-14 02:53:49.282845 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.282855 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-14 02:53:49.282864 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.282874 | orchestrator | 2025-05-14 02:53:49.282883 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-14 02:53:49.282934 | orchestrator | Wednesday 14 May 2025 02:49:33 +0000 (0:00:00.662) 0:04:14.438 ********* 2025-05-14 02:53:49.282944 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-14 02:53:49.282954 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-14 02:53:49.282975 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.282985 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-14 02:53:49.282995 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-14 02:53:49.283004 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-14 02:53:49.283014 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-14 02:53:49.283039 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-14 02:53:49.283079 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.283090 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-14 02:53:49.283100 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-14 02:53:49.283138 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.283149 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-14 02:53:49.283167 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-14 02:53:49.283187 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-14 02:53:49.283197 | orchestrator | 2025-05-14 02:53:49.283206 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 
2025-05-14 02:53:49.283216 | orchestrator | Wednesday 14 May 2025 02:49:34 +0000 (0:00:01.329) 0:04:15.767 ********* 2025-05-14 02:53:49.283225 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.283235 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.283245 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.283310 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.283321 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.283330 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.283340 | orchestrator | 2025-05-14 02:53:49.283360 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-14 02:53:49.283370 | orchestrator | Wednesday 14 May 2025 02:49:35 +0000 (0:00:01.131) 0:04:16.899 ********* 2025-05-14 02:53:49.283380 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.283390 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.283399 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.283434 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.283444 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.283454 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.283463 | orchestrator | 2025-05-14 02:53:49.283473 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-14 02:53:49.283482 | orchestrator | Wednesday 14 May 2025 02:49:37 +0000 (0:00:01.805) 0:04:18.704 ********* 2025-05-14 02:53:49.283493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.283548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.283560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 
'timeout': '30'}}})  2025-05-14 02:53:49.283599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.283611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.283632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.283643 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.283664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.283681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.283692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.283708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.283723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.283733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283744 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.283754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.283772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.283799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.283813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 
02:53:49.283824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.283835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.283845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.283862 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.283894 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.283915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.283942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.283958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.283975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.283986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.283997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284024 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.284067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.284083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284157 | orchestrator | 2025-05-14 02:53:49.284167 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:53:49.284177 | orchestrator | Wednesday 14 May 2025 02:49:40 +0000 (0:00:02.760) 0:04:21.464 ********* 2025-05-14 02:53:49.284191 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:53:49.284202 | orchestrator | 2025-05-14 02:53:49.284211 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-14 02:53:49.284221 | orchestrator | Wednesday 14 May 2025 02:49:41 +0000 (0:00:01.420) 0:04:22.884 ********* 2025-05-14 02:53:49.284232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284322 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.284430 | orchestrator | 2025-05-14 02:53:49.284440 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-14 02:53:49.284450 | orchestrator | Wednesday 14 May 2025 02:49:45 +0000 (0:00:03.831) 0:04:26.716 ********* 2025-05-14 02:53:49.284460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.284476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.284487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284497 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.284508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.284529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.284539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284549 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.284561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.284584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284599 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.284626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.284657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.284686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284703 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.284718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.284734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284749 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.284773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.284791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284823 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.284840 | orchestrator | 2025-05-14 02:53:49.284853 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-14 02:53:49.284868 | orchestrator | Wednesday 14 May 2025 02:49:46 +0000 (0:00:01.375) 0:04:28.092 ********* 2025-05-14 02:53:49.284884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.284910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.284928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.284945 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.284968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.284985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.285012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.285030 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.285125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.285139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.285155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.285166 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.285176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.285193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.285203 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.285213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.285229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.285240 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.285250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.285260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.285270 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.285280 | orchestrator | 2025-05-14 02:53:49.285290 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:53:49.285300 | orchestrator | Wednesday 14 May 2025 02:49:49 +0000 (0:00:02.510) 0:04:30.602 ********* 2025-05-14 02:53:49.285310 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.285320 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.285335 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.285349 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:53:49.285359 | orchestrator | 2025-05-14 02:53:49.285370 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-14 02:53:49.285379 | orchestrator | Wednesday 14 May 2025 02:49:50 +0000 (0:00:01.189) 0:04:31.791 ********* 2025-05-14 02:53:49.285389 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 02:53:49.285399 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 02:53:49.285408 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 02:53:49.285418 | orchestrator | 2025-05-14 02:53:49.285428 | orchestrator | TASK [nova-cell : Check cinder keyring file] 
*********************************** 2025-05-14 02:53:49.285437 | orchestrator | Wednesday 14 May 2025 02:49:51 +0000 (0:00:00.840) 0:04:32.632 ********* 2025-05-14 02:53:49.285447 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 02:53:49.285456 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 02:53:49.285466 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 02:53:49.285476 | orchestrator | 2025-05-14 02:53:49.285485 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-14 02:53:49.285495 | orchestrator | Wednesday 14 May 2025 02:49:51 +0000 (0:00:00.729) 0:04:33.361 ********* 2025-05-14 02:53:49.285505 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:53:49.285515 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:53:49.285525 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:53:49.285535 | orchestrator | 2025-05-14 02:53:49.285545 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-14 02:53:49.285554 | orchestrator | Wednesday 14 May 2025 02:49:52 +0000 (0:00:00.639) 0:04:34.001 ********* 2025-05-14 02:53:49.285564 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:53:49.285573 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:53:49.285583 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:53:49.285592 | orchestrator | 2025-05-14 02:53:49.285602 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-14 02:53:49.285611 | orchestrator | Wednesday 14 May 2025 02:49:53 +0000 (0:00:00.487) 0:04:34.488 ********* 2025-05-14 02:53:49.285621 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-14 02:53:49.285630 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-14 02:53:49.285640 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-14 02:53:49.285647 | orchestrator | 2025-05-14 02:53:49.285655 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-14 02:53:49.285663 | orchestrator | Wednesday 14 May 2025 02:49:54 +0000 (0:00:01.406) 0:04:35.895 ********* 2025-05-14 02:53:49.285671 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-14 02:53:49.285678 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-14 02:53:49.285686 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-14 02:53:49.285694 | orchestrator | 2025-05-14 02:53:49.285702 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-14 02:53:49.285709 | orchestrator | Wednesday 14 May 2025 02:49:55 +0000 (0:00:01.329) 0:04:37.224 ********* 2025-05-14 02:53:49.285717 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-14 02:53:49.285725 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-14 02:53:49.285733 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-14 02:53:49.286178 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-14 02:53:49.286198 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-14 02:53:49.286206 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-14 02:53:49.286214 | orchestrator | 2025-05-14 02:53:49.286222 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-14 02:53:49.286230 | orchestrator | Wednesday 14 May 
2025 02:50:01 +0000 (0:00:05.296) 0:04:42.521 ********* 2025-05-14 02:53:49.286247 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.286255 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.286263 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.286271 | orchestrator | 2025-05-14 02:53:49.286279 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-14 02:53:49.286287 | orchestrator | Wednesday 14 May 2025 02:50:01 +0000 (0:00:00.480) 0:04:43.001 ********* 2025-05-14 02:53:49.286295 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.286302 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.286310 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.286318 | orchestrator | 2025-05-14 02:53:49.286326 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-14 02:53:49.286334 | orchestrator | Wednesday 14 May 2025 02:50:02 +0000 (0:00:00.481) 0:04:43.482 ********* 2025-05-14 02:53:49.286341 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.286349 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.286357 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.286365 | orchestrator | 2025-05-14 02:53:49.286372 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-14 02:53:49.286380 | orchestrator | Wednesday 14 May 2025 02:50:03 +0000 (0:00:01.229) 0:04:44.712 ********* 2025-05-14 02:53:49.286389 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-14 02:53:49.286397 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-14 02:53:49.286405 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-14 02:53:49.286413 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-14 02:53:49.286427 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-14 02:53:49.286435 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-14 02:53:49.286442 | orchestrator | 2025-05-14 02:53:49.286450 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-14 02:53:49.286458 | orchestrator | Wednesday 14 May 2025 02:50:06 +0000 (0:00:03.270) 0:04:47.983 ********* 2025-05-14 02:53:49.286466 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 02:53:49.286474 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:53:49.286482 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 02:53:49.286489 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 02:53:49.286497 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.286505 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:53:49.286512 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.286520 | orchestrator | changed: [testbed-node-5] => (item=None) 
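
Note (not part of the console output): the "Pushing nova secret xml for libvirt" and "Pushing secrets key for libvirt" tasks recorded above register the Ceph client.nova and client.cinder keys as libvirt secrets on the compute nodes, using the UUIDs shown in the log items. kolla-ansible does this by templating the secret XML and key files into the nova-libvirt config directory rather than calling virsh from the playbook, so the snippet below is only a conceptual sketch of the equivalent manual registration. The UUID is taken from the log; the base64 key value is a placeholder, since the real key never appears in the console output.

    # Sketch: register a Ceph client key as a libvirt secret (what the tasks above amount to).
    import subprocess
    import tempfile

    SECRET_UUID = "5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd"          # client.nova secret UUID from the log
    CEPH_KEY_B64 = "<base64 key from ceph.client.nova.keyring>"   # placeholder, not a real key

    SECRET_XML = f"""<secret ephemeral='no' private='no'>
      <uuid>{SECRET_UUID}</uuid>
      <usage type='ceph'>
        <name>client.nova secret</name>
      </usage>
    </secret>
    """

    def define_ceph_secret() -> None:
        """Define the secret object in libvirt, then attach the key material to it."""
        with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
            f.write(SECRET_XML)
            xml_path = f.name
        # Create (or update) the secret definition from the XML file.
        subprocess.run(["virsh", "secret-define", "--file", xml_path], check=True)
        # Associate the actual Ceph key with the secret UUID.
        subprocess.run(
            ["virsh", "secret-set-value", "--secret", SECRET_UUID, "--base64", CEPH_KEY_B64],
            check=True,
        )

    if __name__ == "__main__":
        define_ceph_secret()

The client.cinder secret seen in the same task (UUID 63dd366f-e403-41f2-beff-dad9980a1637) is registered the same way with its own keyring.
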
2025-05-14 02:53:49.286528 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.286536 | orchestrator | 2025-05-14 02:53:49.286543 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-14 02:53:49.286551 | orchestrator | Wednesday 14 May 2025 02:50:09 +0000 (0:00:03.320) 0:04:51.304 ********* 2025-05-14 02:53:49.286559 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.286567 | orchestrator | 2025-05-14 02:53:49.286575 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-14 02:53:49.286583 | orchestrator | Wednesday 14 May 2025 02:50:10 +0000 (0:00:00.120) 0:04:51.424 ********* 2025-05-14 02:53:49.286590 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.286598 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.286606 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.286618 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.286626 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.286634 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.286641 | orchestrator | 2025-05-14 02:53:49.286649 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-14 02:53:49.286657 | orchestrator | Wednesday 14 May 2025 02:50:10 +0000 (0:00:00.884) 0:04:52.308 ********* 2025-05-14 02:53:49.286665 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 02:53:49.286673 | orchestrator | 2025-05-14 02:53:49.286681 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-14 02:53:49.286689 | orchestrator | Wednesday 14 May 2025 02:50:11 +0000 (0:00:00.386) 0:04:52.695 ********* 2025-05-14 02:53:49.286696 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.286706 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.286719 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.286732 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.286744 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.286757 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.286770 | orchestrator | 2025-05-14 02:53:49.286783 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-14 02:53:49.286795 | orchestrator | Wednesday 14 May 2025 02:50:12 +0000 (0:00:00.741) 0:04:53.437 ********* 2025-05-14 02:53:49.286817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.286832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.286853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.286867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.286889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.286911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.286926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.286947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.286962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.286987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287140 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-05-14 02:53:49.287225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 
'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287508 | orchestrator | 2025-05-14 02:53:49.287516 | orchestrator | TASK [nova-cell : 
Copying over nova.conf] ************************************** 2025-05-14 02:53:49.287524 | orchestrator | Wednesday 14 May 2025 02:50:16 +0000 (0:00:04.107) 0:04:57.544 ********* 2025-05-14 02:53:49.287533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.287541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.287554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 
02:53:49.287588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.287605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.287619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287641 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.287670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.287679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 
'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.287738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.287747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.287760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.287774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.287786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.287795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.287943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.287951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 
'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.287986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.287998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.288006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.288015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.288263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.288292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.288306 | orchestrator | 2025-05-14 02:53:49.288319 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-14 02:53:49.288331 | orchestrator | Wednesday 14 May 2025 02:50:23 +0000 (0:00:07.625) 0:05:05.169 ********* 2025-05-14 02:53:49.288343 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.288355 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.288364 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.288374 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.288384 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.288394 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.288404 | orchestrator | 2025-05-14 02:53:49.288413 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-14 02:53:49.288423 | orchestrator | Wednesday 14 May 2025 02:50:25 +0000 (0:00:01.723) 0:05:06.893 ********* 2025-05-14 02:53:49.288433 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-14 02:53:49.288452 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-14 02:53:49.288462 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-14 02:53:49.288472 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.288484 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-14 02:53:49.288494 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-14 02:53:49.288505 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.288516 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-14 02:53:49.288527 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-14 02:53:49.288535 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-14 02:53:49.288542 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.288548 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-14 02:53:49.288555 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-14 02:53:49.288562 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-14 02:53:49.288568 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-14 02:53:49.288584 | orchestrator | 2025-05-14 02:53:49.288591 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-14 02:53:49.288597 | orchestrator | Wednesday 14 May 2025 02:50:30 +0000 (0:00:05.186) 0:05:12.079 ********* 2025-05-14 02:53:49.288604 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.288611 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.288617 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.288624 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.288631 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.288637 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.288644 | orchestrator | 2025-05-14 02:53:49.288651 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-14 02:53:49.288657 | orchestrator | Wednesday 14 May 2025 02:50:31 +0000 (0:00:00.946) 0:05:13.025 ********* 2025-05-14 02:53:49.288664 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-14 02:53:49.288671 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-14 02:53:49.288677 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-14 02:53:49.288684 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-14 02:53:49.288722 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-14 02:53:49.288730 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-14 02:53:49.288737 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-14 02:53:49.288744 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-14 02:53:49.288750 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-14 02:53:49.288757 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-14 02:53:49.288764 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.288771 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-14 02:53:49.288777 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.288784 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-14 02:53:49.288791 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.288797 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:53:49.288804 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:53:49.288811 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:53:49.288818 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:53:49.288826 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:53:49.288834 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:53:49.288841 | orchestrator | 2025-05-14 02:53:49.288850 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-14 02:53:49.288861 | orchestrator | Wednesday 14 May 2025 02:50:39 +0000 (0:00:07.632) 0:05:20.658 ********* 2025-05-14 02:53:49.288869 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:53:49.288882 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:53:49.288889 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-14 02:53:49.288897 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:53:49.288905 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-14 02:53:49.288913 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:53:49.288920 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-14 02:53:49.288927 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:53:49.288935 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:53:49.288942 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:53:49.288949 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-14 02:53:49.288957 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.288964 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:53:49.288972 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:53:49.288980 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-14 02:53:49.288987 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.288995 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-14 02:53:49.289003 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.289010 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:53:49.289018 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:53:49.289025 | orchestrator | changed: 
[testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:53:49.289079 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:53:49.289089 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:53:49.289097 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:53:49.289104 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:53:49.289112 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:53:49.289142 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:53:49.289151 | orchestrator | 2025-05-14 02:53:49.289158 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-14 02:53:49.289166 | orchestrator | Wednesday 14 May 2025 02:50:49 +0000 (0:00:10.761) 0:05:31.420 ********* 2025-05-14 02:53:49.289174 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.289182 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.289189 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.289196 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.289203 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.289209 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.289216 | orchestrator | 2025-05-14 02:53:49.289223 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-14 02:53:49.289229 | orchestrator | Wednesday 14 May 2025 02:50:50 +0000 (0:00:00.757) 0:05:32.178 ********* 2025-05-14 02:53:49.289236 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.289243 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.289258 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.289264 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.289271 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.289278 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.289284 | orchestrator | 2025-05-14 02:53:49.289291 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-14 02:53:49.289298 | orchestrator | Wednesday 14 May 2025 02:50:51 +0000 (0:00:00.920) 0:05:33.098 ********* 2025-05-14 02:53:49.289305 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.289311 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.289318 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.289324 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.289331 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.289338 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.289344 | orchestrator | 2025-05-14 02:53:49.289351 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-14 02:53:49.289358 | orchestrator | Wednesday 14 May 2025 02:50:54 +0000 (0:00:02.874) 0:05:35.973 ********* 2025-05-14 02:53:49.289369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.289378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.289385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.289413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289434 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.289445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.289452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289505 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.289512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.289529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.289575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289583 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.289590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.289597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.289622 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289651 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.289658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.289668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.289676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.289683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.289700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.289732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.289763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289784 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.289794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289813 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.289821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 
02:53:49.289831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.289839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.289857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.289865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.289895 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.289902 | orchestrator | 2025-05-14 02:53:49.289909 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-14 02:53:49.289916 | orchestrator | Wednesday 14 May 2025 02:50:56 +0000 (0:00:01.943) 0:05:37.916 ********* 2025-05-14 02:53:49.289923 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-14 02:53:49.289930 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-14 02:53:49.289936 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.289943 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-14 02:53:49.289950 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-14 02:53:49.289956 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.289963 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-14 02:53:49.289969 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-14 02:53:49.289976 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.289982 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-14 02:53:49.289989 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-14 02:53:49.289996 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.290002 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-14 02:53:49.290009 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-14 02:53:49.290038 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.290061 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-14 02:53:49.290068 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-14 02:53:49.290074 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.290081 | orchestrator | 2025-05-14 02:53:49.290088 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-14 02:53:49.290094 | orchestrator | Wednesday 14 May 2025 02:50:57 +0000 (0:00:01.063) 0:05:38.979 ********* 2025-05-14 02:53:49.290105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.290117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.290125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.290137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.290144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:53:49.290155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:53:49.290162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2025-05-14 02:53:49.290193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.290210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.290222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.290236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.290247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.290276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.290283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.290290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.290298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.290309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.290340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.290347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.290354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.290380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:53:49.290387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:53:49.290401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 
'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:53:49.290519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 
02:53:49.290537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:53:49.290544 | orchestrator | 2025-05-14 02:53:49.290551 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:53:49.290558 | orchestrator | Wednesday 14 May 2025 02:51:01 +0000 (0:00:03.456) 0:05:42.436 ********* 2025-05-14 02:53:49.290564 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.290571 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.290578 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.290585 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.290591 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.290598 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.290604 | orchestrator | 2025-05-14 02:53:49.290615 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:53:49.290622 | orchestrator | Wednesday 14 May 2025 02:51:01 +0000 (0:00:00.914) 0:05:43.350 ********* 2025-05-14 02:53:49.290628 | orchestrator | 2025-05-14 02:53:49.290635 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:53:49.290642 | orchestrator | Wednesday 14 May 2025 02:51:02 +0000 (0:00:00.123) 0:05:43.474 ********* 2025-05-14 02:53:49.290648 | orchestrator | 2025-05-14 02:53:49.290655 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:53:49.290661 | orchestrator | Wednesday 14 May 2025 02:51:02 +0000 (0:00:00.330) 0:05:43.804 ********* 2025-05-14 02:53:49.290668 | orchestrator | 2025-05-14 02:53:49.290675 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:53:49.290681 | orchestrator | Wednesday 14 May 2025 02:51:02 +0000 (0:00:00.114) 0:05:43.919 ********* 2025-05-14 02:53:49.290688 | orchestrator | 2025-05-14 02:53:49.290694 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:53:49.290701 | orchestrator | Wednesday 14 May 2025 02:51:02 +0000 (0:00:00.308) 0:05:44.227 ********* 2025-05-14 02:53:49.290708 | orchestrator | 2025-05-14 02:53:49.290714 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:53:49.290721 | orchestrator | Wednesday 14 May 2025 02:51:02 +0000 (0:00:00.113) 0:05:44.341 ********* 2025-05-14 02:53:49.290728 | orchestrator | 2025-05-14 02:53:49.290734 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-14 02:53:49.290741 | orchestrator | Wednesday 14 May 2025 02:51:03 +0000 (0:00:00.310) 0:05:44.651 ********* 2025-05-14 02:53:49.290747 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:53:49.290754 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.290761 | orchestrator | changed: 
[testbed-node-1] 2025-05-14 02:53:49.290767 | orchestrator | 2025-05-14 02:53:49.290774 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-14 02:53:49.290783 | orchestrator | Wednesday 14 May 2025 02:51:15 +0000 (0:00:12.384) 0:05:57.036 ********* 2025-05-14 02:53:49.290790 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.290797 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:53:49.290803 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:53:49.290810 | orchestrator | 2025-05-14 02:53:49.290817 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-14 02:53:49.290823 | orchestrator | Wednesday 14 May 2025 02:51:27 +0000 (0:00:11.635) 0:06:08.671 ********* 2025-05-14 02:53:49.290830 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.290836 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.290843 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.290850 | orchestrator | 2025-05-14 02:53:49.290856 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-14 02:53:49.290863 | orchestrator | Wednesday 14 May 2025 02:51:50 +0000 (0:00:22.853) 0:06:31.525 ********* 2025-05-14 02:53:49.290870 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.290876 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.290883 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.290889 | orchestrator | 2025-05-14 02:53:49.290896 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-14 02:53:49.290903 | orchestrator | Wednesday 14 May 2025 02:52:17 +0000 (0:00:27.055) 0:06:58.580 ********* 2025-05-14 02:53:49.290909 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.290916 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.290923 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.290929 | orchestrator | 2025-05-14 02:53:49.290936 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-14 02:53:49.290942 | orchestrator | Wednesday 14 May 2025 02:52:17 +0000 (0:00:00.778) 0:06:59.358 ********* 2025-05-14 02:53:49.290949 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.290956 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.290966 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.290973 | orchestrator | 2025-05-14 02:53:49.290979 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-14 02:53:49.290986 | orchestrator | Wednesday 14 May 2025 02:52:18 +0000 (0:00:00.952) 0:07:00.311 ********* 2025-05-14 02:53:49.290993 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:53:49.290999 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:53:49.291006 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:53:49.291012 | orchestrator | 2025-05-14 02:53:49.291019 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-14 02:53:49.291026 | orchestrator | Wednesday 14 May 2025 02:52:40 +0000 (0:00:21.523) 0:07:21.834 ********* 2025-05-14 02:53:49.291032 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.291039 | orchestrator | 2025-05-14 02:53:49.291088 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-14 
02:53:49.291096 | orchestrator | Wednesday 14 May 2025 02:52:40 +0000 (0:00:00.107) 0:07:21.942 ********* 2025-05-14 02:53:49.291103 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.291109 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.291116 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.291123 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.291129 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.291135 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-05-14 02:53:49.291141 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:53:49.291148 | orchestrator | 2025-05-14 02:53:49.291157 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-14 02:53:49.291164 | orchestrator | Wednesday 14 May 2025 02:53:03 +0000 (0:00:22.724) 0:07:44.666 ********* 2025-05-14 02:53:49.291170 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.291176 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.291182 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.291188 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.291194 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.291200 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.291206 | orchestrator | 2025-05-14 02:53:49.291213 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-14 02:53:49.291219 | orchestrator | Wednesday 14 May 2025 02:53:12 +0000 (0:00:09.731) 0:07:54.398 ********* 2025-05-14 02:53:49.291225 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.291231 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.291237 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.291244 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.291250 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.291256 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-05-14 02:53:49.291262 | orchestrator | 2025-05-14 02:53:49.291269 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-14 02:53:49.291275 | orchestrator | Wednesday 14 May 2025 02:53:16 +0000 (0:00:03.370) 0:07:57.768 ********* 2025-05-14 02:53:49.291281 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:53:49.291287 | orchestrator | 2025-05-14 02:53:49.291294 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-14 02:53:49.291300 | orchestrator | Wednesday 14 May 2025 02:53:28 +0000 (0:00:12.382) 0:08:10.150 ********* 2025-05-14 02:53:49.291306 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:53:49.291312 | orchestrator | 2025-05-14 02:53:49.291318 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-14 02:53:49.291324 | orchestrator | Wednesday 14 May 2025 02:53:29 +0000 (0:00:01.130) 0:08:11.281 ********* 2025-05-14 02:53:49.291330 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.291337 | orchestrator | 2025-05-14 02:53:49.291347 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-14 02:53:49.291353 | orchestrator | 
Wednesday 14 May 2025 02:53:30 +0000 (0:00:01.085) 0:08:12.366 ********* 2025-05-14 02:53:49.291360 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:53:49.291366 | orchestrator | 2025-05-14 02:53:49.291372 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-14 02:53:49.291381 | orchestrator | Wednesday 14 May 2025 02:53:41 +0000 (0:00:10.528) 0:08:22.895 ********* 2025-05-14 02:53:49.291388 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:53:49.291394 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:53:49.291400 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:53:49.291406 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:53:49.291412 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:53:49.291418 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:53:49.291424 | orchestrator | 2025-05-14 02:53:49.291431 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-14 02:53:49.291437 | orchestrator | 2025-05-14 02:53:49.291443 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-14 02:53:49.291449 | orchestrator | Wednesday 14 May 2025 02:53:43 +0000 (0:00:01.863) 0:08:24.758 ********* 2025-05-14 02:53:49.291455 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:53:49.291461 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:53:49.291467 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:53:49.291473 | orchestrator | 2025-05-14 02:53:49.291480 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-14 02:53:49.291486 | orchestrator | 2025-05-14 02:53:49.291492 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-14 02:53:49.291498 | orchestrator | Wednesday 14 May 2025 02:53:44 +0000 (0:00:00.865) 0:08:25.623 ********* 2025-05-14 02:53:49.291504 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.291510 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.291516 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.291522 | orchestrator | 2025-05-14 02:53:49.291529 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-14 02:53:49.291535 | orchestrator | 2025-05-14 02:53:49.291541 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-14 02:53:49.291547 | orchestrator | Wednesday 14 May 2025 02:53:44 +0000 (0:00:00.607) 0:08:26.231 ********* 2025-05-14 02:53:49.291553 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-14 02:53:49.291560 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-14 02:53:49.291566 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-14 02:53:49.291572 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-14 02:53:49.291578 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-14 02:53:49.291585 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-14 02:53:49.291591 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:53:49.291597 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-14 02:53:49.291603 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-14 02:53:49.291609 | orchestrator | skipping: 
[testbed-node-4] => (item=nova-compute-ironic)  2025-05-14 02:53:49.291615 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-14 02:53:49.291621 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-14 02:53:49.291628 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-14 02:53:49.291634 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:53:49.291640 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-14 02:53:49.291646 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-14 02:53:49.291652 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-14 02:53:49.291661 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-14 02:53:49.291672 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-14 02:53:49.291678 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-14 02:53:49.291684 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-14 02:53:49.291691 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-14 02:53:49.291697 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-14 02:53:49.291703 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-14 02:53:49.291709 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-14 02:53:49.291715 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-14 02:53:49.291721 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:53:49.291727 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-14 02:53:49.291733 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-14 02:53:49.291740 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-14 02:53:49.291746 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-14 02:53:49.291752 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-14 02:53:49.291758 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-14 02:53:49.291764 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.291770 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.291777 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-14 02:53:49.291783 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-14 02:53:49.291789 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-14 02:53:49.291795 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-14 02:53:49.291801 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-14 02:53:49.291807 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-14 02:53:49.291813 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.291820 | orchestrator | 2025-05-14 02:53:49.291826 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-14 02:53:49.291832 | orchestrator | 2025-05-14 02:53:49.291838 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-14 02:53:49.291844 | orchestrator | Wednesday 14 May 2025 02:53:45 +0000 (0:00:01.066) 0:08:27.298 ********* 2025-05-14 
02:53:49.291853 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-14 02:53:49.291860 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-14 02:53:49.291866 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.291872 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-14 02:53:49.291878 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-14 02:53:49.291884 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.291891 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-14 02:53:49.291897 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-05-14 02:53:49.291903 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.291909 | orchestrator | 2025-05-14 02:53:49.291915 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-14 02:53:49.291921 | orchestrator | 2025-05-14 02:53:49.291928 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-14 02:53:49.291934 | orchestrator | Wednesday 14 May 2025 02:53:46 +0000 (0:00:00.622) 0:08:27.920 ********* 2025-05-14 02:53:49.291940 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.291946 | orchestrator | 2025-05-14 02:53:49.291953 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-14 02:53:49.291959 | orchestrator | 2025-05-14 02:53:49.291965 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-14 02:53:49.291976 | orchestrator | Wednesday 14 May 2025 02:53:47 +0000 (0:00:00.760) 0:08:28.680 ********* 2025-05-14 02:53:49.291982 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:53:49.291988 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:53:49.291994 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:53:49.292000 | orchestrator | 2025-05-14 02:53:49.292006 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:53:49.292013 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:53:49.292020 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-14 02:53:49.292026 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-14 02:53:49.292033 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-14 02:53:49.292039 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-14 02:53:49.292059 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-14 02:53:49.292065 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-05-14 02:53:49.292072 | orchestrator | 2025-05-14 02:53:49.292078 | orchestrator | 2025-05-14 02:53:49.292088 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:53:49.292094 | orchestrator | Wednesday 14 May 2025 02:53:47 +0000 (0:00:00.509) 0:08:29.190 ********* 2025-05-14 02:53:49.292100 | orchestrator | =============================================================================== 2025-05-14 02:53:49.292107 | 
orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.69s 2025-05-14 02:53:49.292113 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.06s 2025-05-14 02:53:49.292119 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.85s 2025-05-14 02:53:49.292125 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.72s 2025-05-14 02:53:49.292132 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.52s 2025-05-14 02:53:49.292138 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.90s 2025-05-14 02:53:49.292144 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.90s 2025-05-14 02:53:49.292150 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.23s 2025-05-14 02:53:49.292156 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.49s 2025-05-14 02:53:49.292162 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.38s 2025-05-14 02:53:49.292168 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.38s 2025-05-14 02:53:49.292174 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.64s 2025-05-14 02:53:49.292181 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.19s 2025-05-14 02:53:49.292187 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.17s 2025-05-14 02:53:49.292193 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.76s 2025-05-14 02:53:49.292199 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.76s 2025-05-14 02:53:49.292205 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.53s 2025-05-14 02:53:49.292211 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.39s 2025-05-14 02:53:49.292222 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.73s 2025-05-14 02:53:49.292231 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.57s 2025-05-14 02:53:49.292237 | orchestrator | 2025-05-14 02:53:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:52.315345 | orchestrator | 2025-05-14 02:53:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:52.315436 | orchestrator | 2025-05-14 02:53:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:55.356292 | orchestrator | 2025-05-14 02:53:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:55.356402 | orchestrator | 2025-05-14 02:53:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:58.400891 | orchestrator | 2025-05-14 02:53:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:53:58.401002 | orchestrator | 2025-05-14 02:53:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:01.444783 | orchestrator | 2025-05-14 02:54:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:01.444894 | orchestrator | 2025-05-14 02:54:01 | INFO  | Wait 1 second(s) until the next check 
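Note: the repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" entries above and below come from a client-side wait loop that polls the deployment task until it leaves the STARTED state. A minimal sketch of that pattern is shown here for illustration only; the helper name get_task_state() is a hypothetical stand-in and not the actual OSISM client API.

import time
import logging

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)
log = logging.getLogger(__name__)

def wait_for_task(task_id, get_task_state, interval=1.0, timeout=3600.0):
    """Poll a task until it leaves PENDING/STARTED or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state(task_id)          # hypothetical callable supplied by the caller
        log.info("Task %s is in state %s", task_id, state)
        if state not in ("PENDING", "STARTED"):
            return state                          # e.g. SUCCESS or FAILURE
        log.info("Wait %d second(s) until the next check", int(interval))
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish within {timeout} seconds")

# Example (with a real state lookup supplied by the caller):
# wait_for_task("d96aeed1-a30d-4e84-85b3-93c7cfc3e055", get_task_state)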
2025-05-14 02:54:04.487691 | orchestrator | 2025-05-14 02:54:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:04.487819 | orchestrator | 2025-05-14 02:54:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:07.540181 | orchestrator | 2025-05-14 02:54:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:07.540307 | orchestrator | 2025-05-14 02:54:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:10.582183 | orchestrator | 2025-05-14 02:54:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:10.582329 | orchestrator | 2025-05-14 02:54:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:13.629286 | orchestrator | 2025-05-14 02:54:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:13.629391 | orchestrator | 2025-05-14 02:54:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:16.672655 | orchestrator | 2025-05-14 02:54:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:16.672750 | orchestrator | 2025-05-14 02:54:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:19.723382 | orchestrator | 2025-05-14 02:54:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:19.723484 | orchestrator | 2025-05-14 02:54:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:22.769614 | orchestrator | 2025-05-14 02:54:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:22.769713 | orchestrator | 2025-05-14 02:54:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:25.818464 | orchestrator | 2025-05-14 02:54:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:25.818537 | orchestrator | 2025-05-14 02:54:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:28.867141 | orchestrator | 2025-05-14 02:54:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:28.867247 | orchestrator | 2025-05-14 02:54:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:31.916552 | orchestrator | 2025-05-14 02:54:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:31.916674 | orchestrator | 2025-05-14 02:54:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:34.967357 | orchestrator | 2025-05-14 02:54:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:34.967492 | orchestrator | 2025-05-14 02:54:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:38.027622 | orchestrator | 2025-05-14 02:54:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:38.027746 | orchestrator | 2025-05-14 02:54:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:41.075845 | orchestrator | 2025-05-14 02:54:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:41.075948 | orchestrator | 2025-05-14 02:54:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:44.114529 | orchestrator | 2025-05-14 02:54:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:44.114637 | orchestrator | 2025-05-14 02:54:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:47.150204 | orchestrator | 2025-05-14 02:54:47 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:47.150294 | orchestrator | 2025-05-14 02:54:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:50.193221 | orchestrator | 2025-05-14 02:54:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:50.193325 | orchestrator | 2025-05-14 02:54:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:53.237791 | orchestrator | 2025-05-14 02:54:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:53.237917 | orchestrator | 2025-05-14 02:54:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:56.277716 | orchestrator | 2025-05-14 02:54:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:56.277816 | orchestrator | 2025-05-14 02:54:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:59.327612 | orchestrator | 2025-05-14 02:54:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:54:59.327742 | orchestrator | 2025-05-14 02:54:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:02.369705 | orchestrator | 2025-05-14 02:55:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:02.369809 | orchestrator | 2025-05-14 02:55:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:05.417389 | orchestrator | 2025-05-14 02:55:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:05.417465 | orchestrator | 2025-05-14 02:55:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:08.459786 | orchestrator | 2025-05-14 02:55:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:08.459887 | orchestrator | 2025-05-14 02:55:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:11.513308 | orchestrator | 2025-05-14 02:55:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:11.513412 | orchestrator | 2025-05-14 02:55:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:14.563102 | orchestrator | 2025-05-14 02:55:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:14.563203 | orchestrator | 2025-05-14 02:55:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:17.609757 | orchestrator | 2025-05-14 02:55:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:17.609888 | orchestrator | 2025-05-14 02:55:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:20.659120 | orchestrator | 2025-05-14 02:55:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:20.659219 | orchestrator | 2025-05-14 02:55:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:23.708642 | orchestrator | 2025-05-14 02:55:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:23.708745 | orchestrator | 2025-05-14 02:55:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:26.756374 | orchestrator | 2025-05-14 02:55:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:26.756474 | orchestrator | 2025-05-14 02:55:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:29.805158 | orchestrator | 2025-05-14 02:55:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 
02:55:29.805252 | orchestrator | 2025-05-14 02:55:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:32.858573 | orchestrator | 2025-05-14 02:55:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:32.858705 | orchestrator | 2025-05-14 02:55:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:35.905117 | orchestrator | 2025-05-14 02:55:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:35.905221 | orchestrator | 2025-05-14 02:55:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:38.954241 | orchestrator | 2025-05-14 02:55:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:38.954339 | orchestrator | 2025-05-14 02:55:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:42.011369 | orchestrator | 2025-05-14 02:55:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:42.011487 | orchestrator | 2025-05-14 02:55:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:45.065376 | orchestrator | 2025-05-14 02:55:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:45.065480 | orchestrator | 2025-05-14 02:55:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:48.106268 | orchestrator | 2025-05-14 02:55:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:48.106400 | orchestrator | 2025-05-14 02:55:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:51.142523 | orchestrator | 2025-05-14 02:55:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:51.142597 | orchestrator | 2025-05-14 02:55:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:54.182130 | orchestrator | 2025-05-14 02:55:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:54.182260 | orchestrator | 2025-05-14 02:55:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:57.219503 | orchestrator | 2025-05-14 02:55:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:55:57.219607 | orchestrator | 2025-05-14 02:55:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:00.267854 | orchestrator | 2025-05-14 02:56:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:00.268039 | orchestrator | 2025-05-14 02:56:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:03.320137 | orchestrator | 2025-05-14 02:56:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:03.320267 | orchestrator | 2025-05-14 02:56:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:06.364420 | orchestrator | 2025-05-14 02:56:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:06.364521 | orchestrator | 2025-05-14 02:56:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:09.414607 | orchestrator | 2025-05-14 02:56:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:09.414724 | orchestrator | 2025-05-14 02:56:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:12.468083 | orchestrator | 2025-05-14 02:56:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:12.468180 | orchestrator | 2025-05-14 02:56:12 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 02:56:15.514424 | orchestrator | 2025-05-14 02:56:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:15.514535 | orchestrator | 2025-05-14 02:56:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:18.561966 | orchestrator | 2025-05-14 02:56:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:18.562130 | orchestrator | 2025-05-14 02:56:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:21.608751 | orchestrator | 2025-05-14 02:56:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:21.608849 | orchestrator | 2025-05-14 02:56:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:24.661982 | orchestrator | 2025-05-14 02:56:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:24.662118 | orchestrator | 2025-05-14 02:56:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:27.706267 | orchestrator | 2025-05-14 02:56:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:27.706413 | orchestrator | 2025-05-14 02:56:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:30.751326 | orchestrator | 2025-05-14 02:56:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:30.751450 | orchestrator | 2025-05-14 02:56:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:33.796785 | orchestrator | 2025-05-14 02:56:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:33.796927 | orchestrator | 2025-05-14 02:56:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:36.847178 | orchestrator | 2025-05-14 02:56:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:36.847309 | orchestrator | 2025-05-14 02:56:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:39.898818 | orchestrator | 2025-05-14 02:56:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:39.898982 | orchestrator | 2025-05-14 02:56:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:42.946380 | orchestrator | 2025-05-14 02:56:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:42.946480 | orchestrator | 2025-05-14 02:56:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:45.996974 | orchestrator | 2025-05-14 02:56:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:45.997074 | orchestrator | 2025-05-14 02:56:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:49.050386 | orchestrator | 2025-05-14 02:56:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:49.050545 | orchestrator | 2025-05-14 02:56:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:52.093455 | orchestrator | 2025-05-14 02:56:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:52.093554 | orchestrator | 2025-05-14 02:56:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:55.131078 | orchestrator | 2025-05-14 02:56:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:55.131195 | orchestrator | 2025-05-14 02:56:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:58.178765 | orchestrator | 2025-05-14 
02:56:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:56:58.178863 | orchestrator | 2025-05-14 02:56:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:01.231479 | orchestrator | 2025-05-14 02:57:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:01.231572 | orchestrator | 2025-05-14 02:57:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:04.282371 | orchestrator | 2025-05-14 02:57:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:04.282476 | orchestrator | 2025-05-14 02:57:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:07.333033 | orchestrator | 2025-05-14 02:57:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:07.333099 | orchestrator | 2025-05-14 02:57:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:10.376213 | orchestrator | 2025-05-14 02:57:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:10.376346 | orchestrator | 2025-05-14 02:57:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:13.424430 | orchestrator | 2025-05-14 02:57:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:13.424528 | orchestrator | 2025-05-14 02:57:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:16.466669 | orchestrator | 2025-05-14 02:57:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:16.466781 | orchestrator | 2025-05-14 02:57:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:19.513110 | orchestrator | 2025-05-14 02:57:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:19.513214 | orchestrator | 2025-05-14 02:57:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:22.552555 | orchestrator | 2025-05-14 02:57:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:22.552683 | orchestrator | 2025-05-14 02:57:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:25.602291 | orchestrator | 2025-05-14 02:57:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:25.602395 | orchestrator | 2025-05-14 02:57:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:28.651740 | orchestrator | 2025-05-14 02:57:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:28.651844 | orchestrator | 2025-05-14 02:57:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:31.693287 | orchestrator | 2025-05-14 02:57:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:31.693384 | orchestrator | 2025-05-14 02:57:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:34.749116 | orchestrator | 2025-05-14 02:57:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:34.749245 | orchestrator | 2025-05-14 02:57:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:37.800101 | orchestrator | 2025-05-14 02:57:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:37.800180 | orchestrator | 2025-05-14 02:57:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:40.854157 | orchestrator | 2025-05-14 02:57:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 
2025-05-14 02:57:40.854282 | orchestrator | 2025-05-14 02:57:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:43.900591 | orchestrator | 2025-05-14 02:57:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:43.900687 | orchestrator | 2025-05-14 02:57:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:46.952019 | orchestrator | 2025-05-14 02:57:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:46.952154 | orchestrator | 2025-05-14 02:57:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:49.997768 | orchestrator | 2025-05-14 02:57:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:49.997958 | orchestrator | 2025-05-14 02:57:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:53.040546 | orchestrator | 2025-05-14 02:57:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:53.040662 | orchestrator | 2025-05-14 02:57:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:56.079155 | orchestrator | 2025-05-14 02:57:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:56.079235 | orchestrator | 2025-05-14 02:57:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:59.119303 | orchestrator | 2025-05-14 02:57:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:57:59.119450 | orchestrator | 2025-05-14 02:57:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:02.162831 | orchestrator | 2025-05-14 02:58:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:02.162979 | orchestrator | 2025-05-14 02:58:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:05.199390 | orchestrator | 2025-05-14 02:58:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:05.199491 | orchestrator | 2025-05-14 02:58:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:08.246405 | orchestrator | 2025-05-14 02:58:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:08.246517 | orchestrator | 2025-05-14 02:58:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:11.294965 | orchestrator | 2025-05-14 02:58:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:11.295076 | orchestrator | 2025-05-14 02:58:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:14.341734 | orchestrator | 2025-05-14 02:58:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:14.341812 | orchestrator | 2025-05-14 02:58:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:17.385055 | orchestrator | 2025-05-14 02:58:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:17.385149 | orchestrator | 2025-05-14 02:58:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:20.434986 | orchestrator | 2025-05-14 02:58:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:20.435205 | orchestrator | 2025-05-14 02:58:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:23.476882 | orchestrator | 2025-05-14 02:58:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:23.477005 | orchestrator | 2025-05-14 02:58:23 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 02:58:26.522213 | orchestrator | 2025-05-14 02:58:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:26.522322 | orchestrator | 2025-05-14 02:58:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:29.574590 | orchestrator | 2025-05-14 02:58:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:29.574715 | orchestrator | 2025-05-14 02:58:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:32.625887 | orchestrator | 2025-05-14 02:58:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:32.625976 | orchestrator | 2025-05-14 02:58:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:35.673554 | orchestrator | 2025-05-14 02:58:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:35.673680 | orchestrator | 2025-05-14 02:58:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:38.722158 | orchestrator | 2025-05-14 02:58:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:38.722263 | orchestrator | 2025-05-14 02:58:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:41.768004 | orchestrator | 2025-05-14 02:58:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:41.768135 | orchestrator | 2025-05-14 02:58:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:44.816702 | orchestrator | 2025-05-14 02:58:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:44.816805 | orchestrator | 2025-05-14 02:58:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:47.868456 | orchestrator | 2025-05-14 02:58:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:47.868558 | orchestrator | 2025-05-14 02:58:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:50.913394 | orchestrator | 2025-05-14 02:58:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:50.913460 | orchestrator | 2025-05-14 02:58:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:53.953749 | orchestrator | 2025-05-14 02:58:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:53.953900 | orchestrator | 2025-05-14 02:58:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:56.999998 | orchestrator | 2025-05-14 02:58:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:58:57.000120 | orchestrator | 2025-05-14 02:58:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:00.043398 | orchestrator | 2025-05-14 02:59:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:00.043465 | orchestrator | 2025-05-14 02:59:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:03.088701 | orchestrator | 2025-05-14 02:59:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:03.088795 | orchestrator | 2025-05-14 02:59:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:06.128735 | orchestrator | 2025-05-14 02:59:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:06.128842 | orchestrator | 2025-05-14 02:59:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:09.173528 | orchestrator | 
2025-05-14 02:59:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:09.173592 | orchestrator | 2025-05-14 02:59:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:12.219581 | orchestrator | 2025-05-14 02:59:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:12.219664 | orchestrator | 2025-05-14 02:59:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:15.270280 | orchestrator | 2025-05-14 02:59:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:15.270356 | orchestrator | 2025-05-14 02:59:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:18.319399 | orchestrator | 2025-05-14 02:59:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:18.320202 | orchestrator | 2025-05-14 02:59:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:21.365324 | orchestrator | 2025-05-14 02:59:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:21.365413 | orchestrator | 2025-05-14 02:59:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:24.418705 | orchestrator | 2025-05-14 02:59:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:24.418790 | orchestrator | 2025-05-14 02:59:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:27.472361 | orchestrator | 2025-05-14 02:59:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:27.472437 | orchestrator | 2025-05-14 02:59:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:30.521731 | orchestrator | 2025-05-14 02:59:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:30.521851 | orchestrator | 2025-05-14 02:59:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:33.567948 | orchestrator | 2025-05-14 02:59:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:33.568029 | orchestrator | 2025-05-14 02:59:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:36.616184 | orchestrator | 2025-05-14 02:59:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:36.616285 | orchestrator | 2025-05-14 02:59:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:39.666422 | orchestrator | 2025-05-14 02:59:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:39.666513 | orchestrator | 2025-05-14 02:59:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:42.713691 | orchestrator | 2025-05-14 02:59:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:42.713791 | orchestrator | 2025-05-14 02:59:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:45.756141 | orchestrator | 2025-05-14 02:59:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:45.756210 | orchestrator | 2025-05-14 02:59:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:48.804196 | orchestrator | 2025-05-14 02:59:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:48.804272 | orchestrator | 2025-05-14 02:59:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:51.850771 | orchestrator | 2025-05-14 02:59:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in 
state STARTED 2025-05-14 02:59:51.850910 | orchestrator | 2025-05-14 02:59:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:54.898271 | orchestrator | 2025-05-14 02:59:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:54.898351 | orchestrator | 2025-05-14 02:59:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:57.952013 | orchestrator | 2025-05-14 02:59:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 02:59:57.952053 | orchestrator | 2025-05-14 02:59:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:01.004551 | orchestrator | 2025-05-14 03:00:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:01.004656 | orchestrator | 2025-05-14 03:00:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:04.062780 | orchestrator | 2025-05-14 03:00:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:04.062907 | orchestrator | 2025-05-14 03:00:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:07.108949 | orchestrator | 2025-05-14 03:00:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:07.109051 | orchestrator | 2025-05-14 03:00:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:10.160885 | orchestrator | 2025-05-14 03:00:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:10.160968 | orchestrator | 2025-05-14 03:00:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:13.197125 | orchestrator | 2025-05-14 03:00:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:13.197218 | orchestrator | 2025-05-14 03:00:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:16.249709 | orchestrator | 2025-05-14 03:00:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:16.249873 | orchestrator | 2025-05-14 03:00:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:19.296707 | orchestrator | 2025-05-14 03:00:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:19.296859 | orchestrator | 2025-05-14 03:00:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:22.338987 | orchestrator | 2025-05-14 03:00:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:22.339105 | orchestrator | 2025-05-14 03:00:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:25.382072 | orchestrator | 2025-05-14 03:00:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:25.382154 | orchestrator | 2025-05-14 03:00:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:28.421004 | orchestrator | 2025-05-14 03:00:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:28.421095 | orchestrator | 2025-05-14 03:00:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:31.463660 | orchestrator | 2025-05-14 03:00:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:31.463781 | orchestrator | 2025-05-14 03:00:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:34.516640 | orchestrator | 2025-05-14 03:00:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:34.516697 | orchestrator | 2025-05-14 03:00:34 | 
INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:37.567584 | orchestrator | 2025-05-14 03:00:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:37.567660 | orchestrator | 2025-05-14 03:00:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:40.613751 | orchestrator | 2025-05-14 03:00:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:40.613900 | orchestrator | 2025-05-14 03:00:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:43.659057 | orchestrator | 2025-05-14 03:00:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:43.659182 | orchestrator | 2025-05-14 03:00:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:46.708452 | orchestrator | 2025-05-14 03:00:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:46.708630 | orchestrator | 2025-05-14 03:00:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:49.762269 | orchestrator | 2025-05-14 03:00:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:49.762335 | orchestrator | 2025-05-14 03:00:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:52.820917 | orchestrator | 2025-05-14 03:00:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:52.821022 | orchestrator | 2025-05-14 03:00:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:55.868584 | orchestrator | 2025-05-14 03:00:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:55.868672 | orchestrator | 2025-05-14 03:00:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:58.917456 | orchestrator | 2025-05-14 03:00:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:00:58.917525 | orchestrator | 2025-05-14 03:00:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:01.960423 | orchestrator | 2025-05-14 03:01:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:01.960498 | orchestrator | 2025-05-14 03:01:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:05.014103 | orchestrator | 2025-05-14 03:01:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:05.014237 | orchestrator | 2025-05-14 03:01:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:08.063159 | orchestrator | 2025-05-14 03:01:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:08.063269 | orchestrator | 2025-05-14 03:01:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:11.106765 | orchestrator | 2025-05-14 03:01:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:11.106929 | orchestrator | 2025-05-14 03:01:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:14.145167 | orchestrator | 2025-05-14 03:01:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:14.145259 | orchestrator | 2025-05-14 03:01:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:17.197542 | orchestrator | 2025-05-14 03:01:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:17.197631 | orchestrator | 2025-05-14 03:01:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:20.253896 | 
orchestrator | 2025-05-14 03:01:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:20.254004 | orchestrator | 2025-05-14 03:01:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:23.301740 | orchestrator | 2025-05-14 03:01:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:23.301903 | orchestrator | 2025-05-14 03:01:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:26.354218 | orchestrator | 2025-05-14 03:01:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:26.354291 | orchestrator | 2025-05-14 03:01:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:29.410442 | orchestrator | 2025-05-14 03:01:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:29.410523 | orchestrator | 2025-05-14 03:01:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:32.462337 | orchestrator | 2025-05-14 03:01:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:32.462456 | orchestrator | 2025-05-14 03:01:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:35.510183 | orchestrator | 2025-05-14 03:01:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:35.510297 | orchestrator | 2025-05-14 03:01:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:38.564741 | orchestrator | 2025-05-14 03:01:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:38.564825 | orchestrator | 2025-05-14 03:01:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:41.610124 | orchestrator | 2025-05-14 03:01:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:41.610163 | orchestrator | 2025-05-14 03:01:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:44.666324 | orchestrator | 2025-05-14 03:01:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:44.666431 | orchestrator | 2025-05-14 03:01:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:47.719888 | orchestrator | 2025-05-14 03:01:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:47.719982 | orchestrator | 2025-05-14 03:01:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:50.773240 | orchestrator | 2025-05-14 03:01:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:50.773337 | orchestrator | 2025-05-14 03:01:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:53.824172 | orchestrator | 2025-05-14 03:01:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:53.824267 | orchestrator | 2025-05-14 03:01:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:56.873043 | orchestrator | 2025-05-14 03:01:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:56.873144 | orchestrator | 2025-05-14 03:01:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:59.913701 | orchestrator | 2025-05-14 03:01:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:01:59.913787 | orchestrator | 2025-05-14 03:01:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:02.965160 | orchestrator | 2025-05-14 03:02:02 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:02.965235 | orchestrator | 2025-05-14 03:02:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:06.016961 | orchestrator | 2025-05-14 03:02:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:06.017050 | orchestrator | 2025-05-14 03:02:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:09.062443 | orchestrator | 2025-05-14 03:02:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:09.062572 | orchestrator | 2025-05-14 03:02:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:12.109929 | orchestrator | 2025-05-14 03:02:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:12.110003 | orchestrator | 2025-05-14 03:02:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:15.150676 | orchestrator | 2025-05-14 03:02:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:15.150756 | orchestrator | 2025-05-14 03:02:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:18.192904 | orchestrator | 2025-05-14 03:02:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:18.192980 | orchestrator | 2025-05-14 03:02:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:21.238605 | orchestrator | 2025-05-14 03:02:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:21.238705 | orchestrator | 2025-05-14 03:02:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:24.286110 | orchestrator | 2025-05-14 03:02:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:24.286201 | orchestrator | 2025-05-14 03:02:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:27.338530 | orchestrator | 2025-05-14 03:02:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:27.338651 | orchestrator | 2025-05-14 03:02:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:30.400793 | orchestrator | 2025-05-14 03:02:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:30.401885 | orchestrator | 2025-05-14 03:02:30 | INFO  | Task 7b2d0b52-c622-4989-8148-97f1f8f516eb is in state STARTED 2025-05-14 03:02:30.401946 | orchestrator | 2025-05-14 03:02:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:33.461246 | orchestrator | 2025-05-14 03:02:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:33.461338 | orchestrator | 2025-05-14 03:02:33 | INFO  | Task 7b2d0b52-c622-4989-8148-97f1f8f516eb is in state STARTED 2025-05-14 03:02:33.461348 | orchestrator | 2025-05-14 03:02:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:36.521107 | orchestrator | 2025-05-14 03:02:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:36.521830 | orchestrator | 2025-05-14 03:02:36 | INFO  | Task 7b2d0b52-c622-4989-8148-97f1f8f516eb is in state STARTED 2025-05-14 03:02:36.521878 | orchestrator | 2025-05-14 03:02:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:39.572881 | orchestrator | 2025-05-14 03:02:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:39.573652 | orchestrator | 2025-05-14 03:02:39 | INFO  | Task 
7b2d0b52-c622-4989-8148-97f1f8f516eb is in state SUCCESS 2025-05-14 03:02:39.573721 | orchestrator | 2025-05-14 03:02:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:42.621965 | orchestrator | 2025-05-14 03:02:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:42.622133 | orchestrator | 2025-05-14 03:02:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:45.673907 | orchestrator | 2025-05-14 03:02:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:45.673992 | orchestrator | 2025-05-14 03:02:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:48.721119 | orchestrator | 2025-05-14 03:02:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:48.721215 | orchestrator | 2025-05-14 03:02:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:51.774486 | orchestrator | 2025-05-14 03:02:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:51.774582 | orchestrator | 2025-05-14 03:02:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:54.826496 | orchestrator | 2025-05-14 03:02:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:54.826596 | orchestrator | 2025-05-14 03:02:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:57.875920 | orchestrator | 2025-05-14 03:02:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:02:57.876019 | orchestrator | 2025-05-14 03:02:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:00.925056 | orchestrator | 2025-05-14 03:03:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:00.925155 | orchestrator | 2025-05-14 03:03:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:03.975867 | orchestrator | 2025-05-14 03:03:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:03.975960 | orchestrator | 2025-05-14 03:03:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:07.022751 | orchestrator | 2025-05-14 03:03:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:07.022882 | orchestrator | 2025-05-14 03:03:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:10.070181 | orchestrator | 2025-05-14 03:03:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:10.070272 | orchestrator | 2025-05-14 03:03:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:13.118668 | orchestrator | 2025-05-14 03:03:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:13.118788 | orchestrator | 2025-05-14 03:03:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:16.155135 | orchestrator | 2025-05-14 03:03:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:16.155236 | orchestrator | 2025-05-14 03:03:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:19.201485 | orchestrator | 2025-05-14 03:03:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:19.201634 | orchestrator | 2025-05-14 03:03:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:22.242153 | orchestrator | 2025-05-14 03:03:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 
03:03:22.242280 | orchestrator | 2025-05-14 03:03:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:25.285905 | orchestrator | 2025-05-14 03:03:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:25.286089 | orchestrator | 2025-05-14 03:03:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:28.335079 | orchestrator | 2025-05-14 03:03:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:28.335175 | orchestrator | 2025-05-14 03:03:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:31.378568 | orchestrator | 2025-05-14 03:03:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:31.378657 | orchestrator | 2025-05-14 03:03:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:34.426095 | orchestrator | 2025-05-14 03:03:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:34.426149 | orchestrator | 2025-05-14 03:03:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:37.485235 | orchestrator | 2025-05-14 03:03:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:37.485305 | orchestrator | 2025-05-14 03:03:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:40.528373 | orchestrator | 2025-05-14 03:03:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:40.528478 | orchestrator | 2025-05-14 03:03:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:43.577941 | orchestrator | 2025-05-14 03:03:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:43.578104 | orchestrator | 2025-05-14 03:03:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:46.631190 | orchestrator | 2025-05-14 03:03:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:46.631271 | orchestrator | 2025-05-14 03:03:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:49.680317 | orchestrator | 2025-05-14 03:03:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:49.680408 | orchestrator | 2025-05-14 03:03:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:52.733488 | orchestrator | 2025-05-14 03:03:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:52.733588 | orchestrator | 2025-05-14 03:03:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:55.781758 | orchestrator | 2025-05-14 03:03:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:55.781856 | orchestrator | 2025-05-14 03:03:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:58.827301 | orchestrator | 2025-05-14 03:03:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:03:58.827399 | orchestrator | 2025-05-14 03:03:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:01.881423 | orchestrator | 2025-05-14 03:04:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:01.881533 | orchestrator | 2025-05-14 03:04:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:04.932165 | orchestrator | 2025-05-14 03:04:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:04.932269 | orchestrator | 2025-05-14 03:04:04 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 03:04:07.985209 | orchestrator | 2025-05-14 03:04:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:07.985291 | orchestrator | 2025-05-14 03:04:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:11.042291 | orchestrator | 2025-05-14 03:04:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:11.042344 | orchestrator | 2025-05-14 03:04:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:14.091158 | orchestrator | 2025-05-14 03:04:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:14.091213 | orchestrator | 2025-05-14 03:04:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:17.138460 | orchestrator | 2025-05-14 03:04:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:17.138545 | orchestrator | 2025-05-14 03:04:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:20.193427 | orchestrator | 2025-05-14 03:04:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:20.193518 | orchestrator | 2025-05-14 03:04:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:23.243315 | orchestrator | 2025-05-14 03:04:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:23.243412 | orchestrator | 2025-05-14 03:04:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:26.282266 | orchestrator | 2025-05-14 03:04:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:26.282386 | orchestrator | 2025-05-14 03:04:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:29.333034 | orchestrator | 2025-05-14 03:04:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:29.333131 | orchestrator | 2025-05-14 03:04:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:32.379303 | orchestrator | 2025-05-14 03:04:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:32.379393 | orchestrator | 2025-05-14 03:04:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:35.428738 | orchestrator | 2025-05-14 03:04:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:35.428885 | orchestrator | 2025-05-14 03:04:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:38.472608 | orchestrator | 2025-05-14 03:04:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:38.472703 | orchestrator | 2025-05-14 03:04:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:41.513269 | orchestrator | 2025-05-14 03:04:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:41.513364 | orchestrator | 2025-05-14 03:04:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:44.565299 | orchestrator | 2025-05-14 03:04:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:44.565399 | orchestrator | 2025-05-14 03:04:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:47.617066 | orchestrator | 2025-05-14 03:04:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:47.617166 | orchestrator | 2025-05-14 03:04:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:50.664844 | orchestrator | 2025-05-14 
03:04:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:50.664939 | orchestrator | 2025-05-14 03:04:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:53.719529 | orchestrator | 2025-05-14 03:04:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:53.719638 | orchestrator | 2025-05-14 03:04:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:56.775189 | orchestrator | 2025-05-14 03:04:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:56.775264 | orchestrator | 2025-05-14 03:04:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:04:59.828362 | orchestrator | 2025-05-14 03:04:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:04:59.828455 | orchestrator | 2025-05-14 03:04:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:02.875931 | orchestrator | 2025-05-14 03:05:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:02.876048 | orchestrator | 2025-05-14 03:05:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:05.933993 | orchestrator | 2025-05-14 03:05:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:05.934113 | orchestrator | 2025-05-14 03:05:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:08.991141 | orchestrator | 2025-05-14 03:05:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:08.991237 | orchestrator | 2025-05-14 03:05:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:12.042720 | orchestrator | 2025-05-14 03:05:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:12.042804 | orchestrator | 2025-05-14 03:05:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:15.083051 | orchestrator | 2025-05-14 03:05:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:15.083137 | orchestrator | 2025-05-14 03:05:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:18.125978 | orchestrator | 2025-05-14 03:05:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:18.126140 | orchestrator | 2025-05-14 03:05:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:21.178223 | orchestrator | 2025-05-14 03:05:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:21.178319 | orchestrator | 2025-05-14 03:05:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:24.223300 | orchestrator | 2025-05-14 03:05:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:24.223368 | orchestrator | 2025-05-14 03:05:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:27.272323 | orchestrator | 2025-05-14 03:05:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:27.272430 | orchestrator | 2025-05-14 03:05:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:30.320580 | orchestrator | 2025-05-14 03:05:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:30.320716 | orchestrator | 2025-05-14 03:05:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:33.370310 | orchestrator | 2025-05-14 03:05:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 
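[editor's note] Shortly after 03:02:30 in the log above, a second task (7b2d0b52-c622-4989-8148-97f1f8f516eb) is reported alongside the long-running one and drops out of the output once it reaches SUCCESS at 03:02:39, which suggests the wait loop tracks a set of task IDs and discards each one as it finishes. A hedged sketch of that behaviour, again with a hypothetical get_task_state callable rather than the real client:

    import time

    def wait_for_tasks(task_ids, get_task_state, poll_delay=1.0):
        """Poll several tasks, dropping each one once it reaches SUCCESS.

        get_task_state is a hypothetical callable; the real client-side
        behaviour of the osism tooling may differ.
        """
        pending = set(task_ids)
        while pending:
            # Iterate over a snapshot so items can be discarded safely.
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(poll_delay)} second(s) until the next check")
                time.sleep(poll_delay)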
2025-05-14 03:05:33.370408 | orchestrator | 2025-05-14 03:05:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:36.425239 | orchestrator | 2025-05-14 03:05:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:36.425314 | orchestrator | 2025-05-14 03:05:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:39.460313 | orchestrator | 2025-05-14 03:05:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:39.460451 | orchestrator | 2025-05-14 03:05:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:42.503652 | orchestrator | 2025-05-14 03:05:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:42.503789 | orchestrator | 2025-05-14 03:05:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:45.553893 | orchestrator | 2025-05-14 03:05:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:45.553991 | orchestrator | 2025-05-14 03:05:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:48.607337 | orchestrator | 2025-05-14 03:05:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:48.607436 | orchestrator | 2025-05-14 03:05:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:51.665195 | orchestrator | 2025-05-14 03:05:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:51.665323 | orchestrator | 2025-05-14 03:05:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:54.721014 | orchestrator | 2025-05-14 03:05:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:54.721126 | orchestrator | 2025-05-14 03:05:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:05:57.777593 | orchestrator | 2025-05-14 03:05:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:05:57.777716 | orchestrator | 2025-05-14 03:05:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:00.822812 | orchestrator | 2025-05-14 03:06:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:00.822914 | orchestrator | 2025-05-14 03:06:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:03.870663 | orchestrator | 2025-05-14 03:06:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:03.870795 | orchestrator | 2025-05-14 03:06:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:06.923402 | orchestrator | 2025-05-14 03:06:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:06.923537 | orchestrator | 2025-05-14 03:06:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:09.974887 | orchestrator | 2025-05-14 03:06:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:09.975004 | orchestrator | 2025-05-14 03:06:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:13.028441 | orchestrator | 2025-05-14 03:06:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:13.028575 | orchestrator | 2025-05-14 03:06:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:16.072270 | orchestrator | 2025-05-14 03:06:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:16.072345 | orchestrator | 2025-05-14 03:06:16 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 03:06:19.114659 | orchestrator | 2025-05-14 03:06:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:19.114813 | orchestrator | 2025-05-14 03:06:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:22.155600 | orchestrator | 2025-05-14 03:06:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:22.155714 | orchestrator | 2025-05-14 03:06:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:25.203443 | orchestrator | 2025-05-14 03:06:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:25.203580 | orchestrator | 2025-05-14 03:06:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:28.245879 | orchestrator | 2025-05-14 03:06:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:28.245949 | orchestrator | 2025-05-14 03:06:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:31.294489 | orchestrator | 2025-05-14 03:06:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:31.294626 | orchestrator | 2025-05-14 03:06:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:34.343825 | orchestrator | 2025-05-14 03:06:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:34.343926 | orchestrator | 2025-05-14 03:06:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:37.407189 | orchestrator | 2025-05-14 03:06:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:37.407327 | orchestrator | 2025-05-14 03:06:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:40.463091 | orchestrator | 2025-05-14 03:06:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:40.463201 | orchestrator | 2025-05-14 03:06:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:43.515448 | orchestrator | 2025-05-14 03:06:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:43.515543 | orchestrator | 2025-05-14 03:06:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:46.568918 | orchestrator | 2025-05-14 03:06:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:46.568990 | orchestrator | 2025-05-14 03:06:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:49.638342 | orchestrator | 2025-05-14 03:06:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:49.638463 | orchestrator | 2025-05-14 03:06:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:52.688922 | orchestrator | 2025-05-14 03:06:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:52.689022 | orchestrator | 2025-05-14 03:06:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:55.738894 | orchestrator | 2025-05-14 03:06:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:55.739044 | orchestrator | 2025-05-14 03:06:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:06:58.794120 | orchestrator | 2025-05-14 03:06:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:06:58.794217 | orchestrator | 2025-05-14 03:06:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:01.845093 | orchestrator | 
2025-05-14 03:07:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:01.845207 | orchestrator | 2025-05-14 03:07:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:04.895689 | orchestrator | 2025-05-14 03:07:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:04.895839 | orchestrator | 2025-05-14 03:07:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:07.949862 | orchestrator | 2025-05-14 03:07:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:07.949963 | orchestrator | 2025-05-14 03:07:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:11.001099 | orchestrator | 2025-05-14 03:07:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:11.001248 | orchestrator | 2025-05-14 03:07:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:14.062201 | orchestrator | 2025-05-14 03:07:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:14.062273 | orchestrator | 2025-05-14 03:07:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:17.108213 | orchestrator | 2025-05-14 03:07:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:17.108288 | orchestrator | 2025-05-14 03:07:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:20.150489 | orchestrator | 2025-05-14 03:07:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:20.150687 | orchestrator | 2025-05-14 03:07:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:23.186694 | orchestrator | 2025-05-14 03:07:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:23.186982 | orchestrator | 2025-05-14 03:07:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:26.235348 | orchestrator | 2025-05-14 03:07:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:26.235452 | orchestrator | 2025-05-14 03:07:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:29.286192 | orchestrator | 2025-05-14 03:07:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:29.286295 | orchestrator | 2025-05-14 03:07:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:32.334237 | orchestrator | 2025-05-14 03:07:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:32.334341 | orchestrator | 2025-05-14 03:07:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:35.381134 | orchestrator | 2025-05-14 03:07:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:35.381236 | orchestrator | 2025-05-14 03:07:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:38.430666 | orchestrator | 2025-05-14 03:07:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:38.430826 | orchestrator | 2025-05-14 03:07:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:41.479551 | orchestrator | 2025-05-14 03:07:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:41.479649 | orchestrator | 2025-05-14 03:07:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:44.525575 | orchestrator | 2025-05-14 03:07:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in 
state STARTED 2025-05-14 03:07:44.525693 | orchestrator | 2025-05-14 03:07:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:47.578304 | orchestrator | 2025-05-14 03:07:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:47.578428 | orchestrator | 2025-05-14 03:07:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:50.628735 | orchestrator | 2025-05-14 03:07:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:50.628915 | orchestrator | 2025-05-14 03:07:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:53.677166 | orchestrator | 2025-05-14 03:07:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:53.677267 | orchestrator | 2025-05-14 03:07:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:56.728365 | orchestrator | 2025-05-14 03:07:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:56.728445 | orchestrator | 2025-05-14 03:07:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:07:59.774669 | orchestrator | 2025-05-14 03:07:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:07:59.774767 | orchestrator | 2025-05-14 03:07:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:08:02.813068 | orchestrator | 2025-05-14 03:08:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:08:02.813184 | orchestrator | 2025-05-14 03:08:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:08:05.858718 | orchestrator | 2025-05-14 03:08:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:08:05.858906 | orchestrator | 2025-05-14 03:08:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:08:08.913712 | orchestrator | 2025-05-14 03:08:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:08:08.913961 | orchestrator | 2025-05-14 03:08:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:08:11.961914 | orchestrator | 2025-05-14 03:08:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:08:11.961996 | orchestrator | 2025-05-14 03:08:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:08:15.010518 | orchestrator | 2025-05-14 03:08:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:08:15.010620 | orchestrator | 2025-05-14 03:08:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:08:18.059617 | orchestrator | 2025-05-14 03:08:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:08:18.059720 | orchestrator | 2025-05-14 03:08:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:08:21.101609 | orchestrator | 2025-05-14 03:08:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:08:21.101701 | orchestrator | 2025-05-14 03:08:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:08:24.139514 | orchestrator | 2025-05-14 03:08:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:08:24.139633 | orchestrator | 2025-05-14 03:08:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:08:27.186339 | orchestrator | 2025-05-14 03:08:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:08:27.186460 | orchestrator | 2025-05-14 03:08:27 | 
INFO  | Wait 1 second(s) until the next check
2025-05-14 03:08:30.237820 | orchestrator | 2025-05-14 03:08:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 03:08:30.237917 | orchestrator | 2025-05-14 03:08:30 | INFO  | Wait 1 second(s) until the next check
[... the same pair of messages repeats roughly every 3 seconds while task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 stays in state STARTED ...]
2025-05-14 03:12:31.061379 | orchestrator | 2025-05-14 03:12:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 03:12:31.062194 | orchestrator | 2025-05-14 03:12:31 | INFO  | Task 4686493a-9e1f-49c5-841b-80136129e72a is in state STARTED
2025-05-14 03:12:31.062326 | orchestrator | 2025-05-14 03:12:31 | INFO  | Wait 1 second(s) until the next check
[... both tasks remain in state STARTED on the next two checks ...]
2025-05-14 03:12:40.225317 | orchestrator | 2025-05-14 03:12:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 03:12:40.225415 | orchestrator | 2025-05-14 03:12:40 | INFO  | Task 4686493a-9e1f-49c5-841b-80136129e72a is in state SUCCESS
2025-05-14 03:12:40.225429 | orchestrator | 2025-05-14 03:12:40 | INFO  | Wait 1 second(s) until the next check
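The repeating "is in state ... / Wait 1 second(s) until the next check" pairs above come from a wait loop that polls the state of a deployment task until it reaches a terminal state. A minimal sketch of such a loop is shown below; the callable get_task_state and all parameter names are illustrative assumptions, not the actual OSISM client API.

    import time

    def wait_for_task(task_id, get_task_state, interval=1.0):
        # Hypothetical polling loop approximating the log output above.
        # get_task_state is any callable that returns the task's current state.
        while True:
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                return state
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)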
2025-05-14 03:12:43.271843 | orchestrator | 2025-05-14 03:12:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 03:12:43.271995 | orchestrator | 2025-05-14 03:12:43 | INFO  | Wait 1 second(s) until the next check
[... task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 stays in state STARTED and the same pair of messages repeats roughly every 3 seconds until 03:22:25 ...]
03:22:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:10.282303 | orchestrator | 2025-05-14 03:22:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:13.332012 | orchestrator | 2025-05-14 03:22:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:13.332115 | orchestrator | 2025-05-14 03:22:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:16.374995 | orchestrator | 2025-05-14 03:22:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:16.375106 | orchestrator | 2025-05-14 03:22:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:19.413262 | orchestrator | 2025-05-14 03:22:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:19.413417 | orchestrator | 2025-05-14 03:22:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:22.462840 | orchestrator | 2025-05-14 03:22:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:22.462945 | orchestrator | 2025-05-14 03:22:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:25.516123 | orchestrator | 2025-05-14 03:22:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:25.516257 | orchestrator | 2025-05-14 03:22:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:28.574685 | orchestrator | 2025-05-14 03:22:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:28.577045 | orchestrator | 2025-05-14 03:22:28 | INFO  | Task aa426bc6-3c58-4a77-9206-46d6163bd361 is in state STARTED 2025-05-14 03:22:28.577164 | orchestrator | 2025-05-14 03:22:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:31.635595 | orchestrator | 2025-05-14 03:22:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:31.637773 | orchestrator | 2025-05-14 03:22:31 | INFO  | Task aa426bc6-3c58-4a77-9206-46d6163bd361 is in state STARTED 2025-05-14 03:22:31.637852 | orchestrator | 2025-05-14 03:22:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:34.688568 | orchestrator | 2025-05-14 03:22:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:34.689882 | orchestrator | 2025-05-14 03:22:34 | INFO  | Task aa426bc6-3c58-4a77-9206-46d6163bd361 is in state STARTED 2025-05-14 03:22:34.689962 | orchestrator | 2025-05-14 03:22:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:37.751485 | orchestrator | 2025-05-14 03:22:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:37.756134 | orchestrator | 2025-05-14 03:22:37 | INFO  | Task aa426bc6-3c58-4a77-9206-46d6163bd361 is in state STARTED 2025-05-14 03:22:37.756174 | orchestrator | 2025-05-14 03:22:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:40.801185 | orchestrator | 2025-05-14 03:22:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:40.802353 | orchestrator | 2025-05-14 03:22:40 | INFO  | Task aa426bc6-3c58-4a77-9206-46d6163bd361 is in state SUCCESS 2025-05-14 03:22:40.802404 | orchestrator | 2025-05-14 03:22:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:43.854151 | orchestrator | 2025-05-14 03:22:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:43.854240 | orchestrator | 2025-05-14 03:22:43 | 
INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:46.913863 | orchestrator | 2025-05-14 03:22:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:46.913967 | orchestrator | 2025-05-14 03:22:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:49.962533 | orchestrator | 2025-05-14 03:22:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:49.962638 | orchestrator | 2025-05-14 03:22:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:53.015949 | orchestrator | 2025-05-14 03:22:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:53.016054 | orchestrator | 2025-05-14 03:22:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:56.055022 | orchestrator | 2025-05-14 03:22:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:56.055121 | orchestrator | 2025-05-14 03:22:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:59.095512 | orchestrator | 2025-05-14 03:22:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:22:59.095613 | orchestrator | 2025-05-14 03:22:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:02.137028 | orchestrator | 2025-05-14 03:23:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:02.137147 | orchestrator | 2025-05-14 03:23:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:05.179959 | orchestrator | 2025-05-14 03:23:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:05.180067 | orchestrator | 2025-05-14 03:23:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:08.223867 | orchestrator | 2025-05-14 03:23:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:08.223993 | orchestrator | 2025-05-14 03:23:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:11.266750 | orchestrator | 2025-05-14 03:23:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:11.266921 | orchestrator | 2025-05-14 03:23:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:14.305719 | orchestrator | 2025-05-14 03:23:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:14.305820 | orchestrator | 2025-05-14 03:23:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:17.352176 | orchestrator | 2025-05-14 03:23:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:17.352321 | orchestrator | 2025-05-14 03:23:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:20.395326 | orchestrator | 2025-05-14 03:23:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:20.395427 | orchestrator | 2025-05-14 03:23:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:23.444301 | orchestrator | 2025-05-14 03:23:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:23.444434 | orchestrator | 2025-05-14 03:23:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:26.496636 | orchestrator | 2025-05-14 03:23:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:26.496696 | orchestrator | 2025-05-14 03:23:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:29.543556 | 
orchestrator | 2025-05-14 03:23:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:29.543680 | orchestrator | 2025-05-14 03:23:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:32.593418 | orchestrator | 2025-05-14 03:23:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:32.593514 | orchestrator | 2025-05-14 03:23:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:35.641531 | orchestrator | 2025-05-14 03:23:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:35.641638 | orchestrator | 2025-05-14 03:23:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:38.693545 | orchestrator | 2025-05-14 03:23:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:38.693647 | orchestrator | 2025-05-14 03:23:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:41.748623 | orchestrator | 2025-05-14 03:23:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:41.748733 | orchestrator | 2025-05-14 03:23:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:44.805262 | orchestrator | 2025-05-14 03:23:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:44.805339 | orchestrator | 2025-05-14 03:23:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:47.853095 | orchestrator | 2025-05-14 03:23:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:47.853328 | orchestrator | 2025-05-14 03:23:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:50.894385 | orchestrator | 2025-05-14 03:23:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:50.894511 | orchestrator | 2025-05-14 03:23:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:53.942238 | orchestrator | 2025-05-14 03:23:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:53.942320 | orchestrator | 2025-05-14 03:23:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:56.984275 | orchestrator | 2025-05-14 03:23:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:23:56.984374 | orchestrator | 2025-05-14 03:23:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:00.032830 | orchestrator | 2025-05-14 03:24:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:00.032936 | orchestrator | 2025-05-14 03:24:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:03.079913 | orchestrator | 2025-05-14 03:24:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:03.080013 | orchestrator | 2025-05-14 03:24:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:06.131430 | orchestrator | 2025-05-14 03:24:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:06.131515 | orchestrator | 2025-05-14 03:24:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:09.173341 | orchestrator | 2025-05-14 03:24:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:09.173466 | orchestrator | 2025-05-14 03:24:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:12.216468 | orchestrator | 2025-05-14 03:24:12 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:12.216593 | orchestrator | 2025-05-14 03:24:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:15.262104 | orchestrator | 2025-05-14 03:24:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:15.262276 | orchestrator | 2025-05-14 03:24:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:18.311744 | orchestrator | 2025-05-14 03:24:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:18.311848 | orchestrator | 2025-05-14 03:24:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:21.357623 | orchestrator | 2025-05-14 03:24:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:21.357719 | orchestrator | 2025-05-14 03:24:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:24.405829 | orchestrator | 2025-05-14 03:24:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:24.405966 | orchestrator | 2025-05-14 03:24:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:27.451888 | orchestrator | 2025-05-14 03:24:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:27.451978 | orchestrator | 2025-05-14 03:24:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:30.497799 | orchestrator | 2025-05-14 03:24:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:30.497890 | orchestrator | 2025-05-14 03:24:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:33.549428 | orchestrator | 2025-05-14 03:24:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:33.549588 | orchestrator | 2025-05-14 03:24:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:36.591365 | orchestrator | 2025-05-14 03:24:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:36.591495 | orchestrator | 2025-05-14 03:24:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:39.639060 | orchestrator | 2025-05-14 03:24:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:39.639185 | orchestrator | 2025-05-14 03:24:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:42.686194 | orchestrator | 2025-05-14 03:24:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:42.686299 | orchestrator | 2025-05-14 03:24:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:45.734478 | orchestrator | 2025-05-14 03:24:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:45.734547 | orchestrator | 2025-05-14 03:24:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:48.784052 | orchestrator | 2025-05-14 03:24:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:48.784204 | orchestrator | 2025-05-14 03:24:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:51.832052 | orchestrator | 2025-05-14 03:24:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:51.832279 | orchestrator | 2025-05-14 03:24:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:54.874604 | orchestrator | 2025-05-14 03:24:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 
03:24:54.874726 | orchestrator | 2025-05-14 03:24:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:24:57.923120 | orchestrator | 2025-05-14 03:24:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:24:57.923212 | orchestrator | 2025-05-14 03:24:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:00.968798 | orchestrator | 2025-05-14 03:25:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:00.968903 | orchestrator | 2025-05-14 03:25:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:04.017395 | orchestrator | 2025-05-14 03:25:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:04.017495 | orchestrator | 2025-05-14 03:25:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:07.068496 | orchestrator | 2025-05-14 03:25:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:07.068601 | orchestrator | 2025-05-14 03:25:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:10.116022 | orchestrator | 2025-05-14 03:25:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:10.116119 | orchestrator | 2025-05-14 03:25:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:13.166295 | orchestrator | 2025-05-14 03:25:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:13.166398 | orchestrator | 2025-05-14 03:25:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:16.207585 | orchestrator | 2025-05-14 03:25:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:16.207691 | orchestrator | 2025-05-14 03:25:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:19.255887 | orchestrator | 2025-05-14 03:25:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:19.256019 | orchestrator | 2025-05-14 03:25:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:22.302926 | orchestrator | 2025-05-14 03:25:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:22.303028 | orchestrator | 2025-05-14 03:25:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:25.349807 | orchestrator | 2025-05-14 03:25:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:25.349914 | orchestrator | 2025-05-14 03:25:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:28.397842 | orchestrator | 2025-05-14 03:25:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:28.397939 | orchestrator | 2025-05-14 03:25:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:31.448304 | orchestrator | 2025-05-14 03:25:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:31.448384 | orchestrator | 2025-05-14 03:25:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:34.499780 | orchestrator | 2025-05-14 03:25:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:34.499874 | orchestrator | 2025-05-14 03:25:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:37.541231 | orchestrator | 2025-05-14 03:25:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:37.541336 | orchestrator | 2025-05-14 03:25:37 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 03:25:40.582620 | orchestrator | 2025-05-14 03:25:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:40.582792 | orchestrator | 2025-05-14 03:25:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:43.633974 | orchestrator | 2025-05-14 03:25:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:43.634306 | orchestrator | 2025-05-14 03:25:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:46.684783 | orchestrator | 2025-05-14 03:25:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:46.684896 | orchestrator | 2025-05-14 03:25:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:49.739845 | orchestrator | 2025-05-14 03:25:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:49.739954 | orchestrator | 2025-05-14 03:25:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:52.786550 | orchestrator | 2025-05-14 03:25:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:52.786648 | orchestrator | 2025-05-14 03:25:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:55.829967 | orchestrator | 2025-05-14 03:25:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:55.830090 | orchestrator | 2025-05-14 03:25:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:25:58.880445 | orchestrator | 2025-05-14 03:25:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:25:58.880544 | orchestrator | 2025-05-14 03:25:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:01.922847 | orchestrator | 2025-05-14 03:26:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:01.923008 | orchestrator | 2025-05-14 03:26:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:04.977419 | orchestrator | 2025-05-14 03:26:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:04.977516 | orchestrator | 2025-05-14 03:26:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:08.027925 | orchestrator | 2025-05-14 03:26:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:08.028037 | orchestrator | 2025-05-14 03:26:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:11.074798 | orchestrator | 2025-05-14 03:26:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:11.074888 | orchestrator | 2025-05-14 03:26:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:14.117080 | orchestrator | 2025-05-14 03:26:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:14.117225 | orchestrator | 2025-05-14 03:26:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:17.166317 | orchestrator | 2025-05-14 03:26:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:17.166395 | orchestrator | 2025-05-14 03:26:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:20.213566 | orchestrator | 2025-05-14 03:26:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:20.213676 | orchestrator | 2025-05-14 03:26:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:23.256990 | orchestrator | 2025-05-14 
03:26:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:23.257121 | orchestrator | 2025-05-14 03:26:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:26.304587 | orchestrator | 2025-05-14 03:26:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:26.304693 | orchestrator | 2025-05-14 03:26:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:29.366767 | orchestrator | 2025-05-14 03:26:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:29.366861 | orchestrator | 2025-05-14 03:26:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:32.414974 | orchestrator | 2025-05-14 03:26:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:32.415080 | orchestrator | 2025-05-14 03:26:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:35.471539 | orchestrator | 2025-05-14 03:26:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:35.471657 | orchestrator | 2025-05-14 03:26:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:38.522926 | orchestrator | 2025-05-14 03:26:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:38.523049 | orchestrator | 2025-05-14 03:26:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:41.576765 | orchestrator | 2025-05-14 03:26:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:41.576842 | orchestrator | 2025-05-14 03:26:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:44.624605 | orchestrator | 2025-05-14 03:26:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:44.624777 | orchestrator | 2025-05-14 03:26:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:47.673579 | orchestrator | 2025-05-14 03:26:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:47.673660 | orchestrator | 2025-05-14 03:26:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:50.728204 | orchestrator | 2025-05-14 03:26:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:50.728339 | orchestrator | 2025-05-14 03:26:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:53.769905 | orchestrator | 2025-05-14 03:26:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:53.769996 | orchestrator | 2025-05-14 03:26:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:56.814112 | orchestrator | 2025-05-14 03:26:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:56.814224 | orchestrator | 2025-05-14 03:26:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:26:59.865284 | orchestrator | 2025-05-14 03:26:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:26:59.865395 | orchestrator | 2025-05-14 03:26:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:02.916093 | orchestrator | 2025-05-14 03:27:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:02.916249 | orchestrator | 2025-05-14 03:27:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:05.971935 | orchestrator | 2025-05-14 03:27:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 
2025-05-14 03:27:05.972033 | orchestrator | 2025-05-14 03:27:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:09.026325 | orchestrator | 2025-05-14 03:27:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:09.026508 | orchestrator | 2025-05-14 03:27:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:12.076739 | orchestrator | 2025-05-14 03:27:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:12.076865 | orchestrator | 2025-05-14 03:27:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:15.121222 | orchestrator | 2025-05-14 03:27:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:15.121329 | orchestrator | 2025-05-14 03:27:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:18.164735 | orchestrator | 2025-05-14 03:27:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:18.164835 | orchestrator | 2025-05-14 03:27:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:21.202716 | orchestrator | 2025-05-14 03:27:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:21.202792 | orchestrator | 2025-05-14 03:27:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:24.250327 | orchestrator | 2025-05-14 03:27:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:24.250478 | orchestrator | 2025-05-14 03:27:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:27.298315 | orchestrator | 2025-05-14 03:27:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:27.298452 | orchestrator | 2025-05-14 03:27:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:30.347135 | orchestrator | 2025-05-14 03:27:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:30.347234 | orchestrator | 2025-05-14 03:27:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:33.394165 | orchestrator | 2025-05-14 03:27:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:33.394255 | orchestrator | 2025-05-14 03:27:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:36.438557 | orchestrator | 2025-05-14 03:27:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:36.438689 | orchestrator | 2025-05-14 03:27:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:39.486974 | orchestrator | 2025-05-14 03:27:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:39.487072 | orchestrator | 2025-05-14 03:27:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:42.538415 | orchestrator | 2025-05-14 03:27:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:42.538533 | orchestrator | 2025-05-14 03:27:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:45.580385 | orchestrator | 2025-05-14 03:27:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:45.580497 | orchestrator | 2025-05-14 03:27:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:48.626741 | orchestrator | 2025-05-14 03:27:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:48.626870 | orchestrator | 2025-05-14 03:27:48 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 03:27:51.674377 | orchestrator | 2025-05-14 03:27:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:51.674485 | orchestrator | 2025-05-14 03:27:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:54.720065 | orchestrator | 2025-05-14 03:27:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:54.720193 | orchestrator | 2025-05-14 03:27:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:27:57.763871 | orchestrator | 2025-05-14 03:27:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:27:57.763972 | orchestrator | 2025-05-14 03:27:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:00.807878 | orchestrator | 2025-05-14 03:28:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:00.808004 | orchestrator | 2025-05-14 03:28:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:03.854945 | orchestrator | 2025-05-14 03:28:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:03.855058 | orchestrator | 2025-05-14 03:28:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:06.903066 | orchestrator | 2025-05-14 03:28:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:06.903229 | orchestrator | 2025-05-14 03:28:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:09.967859 | orchestrator | 2025-05-14 03:28:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:09.967990 | orchestrator | 2025-05-14 03:28:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:13.022858 | orchestrator | 2025-05-14 03:28:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:13.022947 | orchestrator | 2025-05-14 03:28:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:16.065397 | orchestrator | 2025-05-14 03:28:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:16.065473 | orchestrator | 2025-05-14 03:28:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:19.105935 | orchestrator | 2025-05-14 03:28:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:19.106126 | orchestrator | 2025-05-14 03:28:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:22.154938 | orchestrator | 2025-05-14 03:28:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:22.155254 | orchestrator | 2025-05-14 03:28:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:25.200501 | orchestrator | 2025-05-14 03:28:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:25.200628 | orchestrator | 2025-05-14 03:28:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:28.254588 | orchestrator | 2025-05-14 03:28:28 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:28.254736 | orchestrator | 2025-05-14 03:28:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:31.301322 | orchestrator | 2025-05-14 03:28:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:31.301454 | orchestrator | 2025-05-14 03:28:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:34.359189 | orchestrator | 
2025-05-14 03:28:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:34.359299 | orchestrator | 2025-05-14 03:28:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:37.407983 | orchestrator | 2025-05-14 03:28:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:37.408182 | orchestrator | 2025-05-14 03:28:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:40.458666 | orchestrator | 2025-05-14 03:28:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:40.458765 | orchestrator | 2025-05-14 03:28:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:43.507620 | orchestrator | 2025-05-14 03:28:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:43.507716 | orchestrator | 2025-05-14 03:28:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:46.561278 | orchestrator | 2025-05-14 03:28:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:46.561380 | orchestrator | 2025-05-14 03:28:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:49.608705 | orchestrator | 2025-05-14 03:28:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:49.608788 | orchestrator | 2025-05-14 03:28:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:52.657562 | orchestrator | 2025-05-14 03:28:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:52.657671 | orchestrator | 2025-05-14 03:28:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:55.702522 | orchestrator | 2025-05-14 03:28:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:55.702648 | orchestrator | 2025-05-14 03:28:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:28:58.757596 | orchestrator | 2025-05-14 03:28:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:28:58.757696 | orchestrator | 2025-05-14 03:28:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:01.808016 | orchestrator | 2025-05-14 03:29:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:01.808141 | orchestrator | 2025-05-14 03:29:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:04.849706 | orchestrator | 2025-05-14 03:29:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:04.849833 | orchestrator | 2025-05-14 03:29:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:07.899890 | orchestrator | 2025-05-14 03:29:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:07.900063 | orchestrator | 2025-05-14 03:29:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:10.950645 | orchestrator | 2025-05-14 03:29:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:10.950734 | orchestrator | 2025-05-14 03:29:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:13.999535 | orchestrator | 2025-05-14 03:29:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:13.999629 | orchestrator | 2025-05-14 03:29:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:17.054077 | orchestrator | 2025-05-14 03:29:17 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in 
state STARTED 2025-05-14 03:29:17.054173 | orchestrator | 2025-05-14 03:29:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:20.089330 | orchestrator | 2025-05-14 03:29:20 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:20.089428 | orchestrator | 2025-05-14 03:29:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:23.133206 | orchestrator | 2025-05-14 03:29:23 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:23.133314 | orchestrator | 2025-05-14 03:29:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:26.175065 | orchestrator | 2025-05-14 03:29:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:26.175244 | orchestrator | 2025-05-14 03:29:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:29.214369 | orchestrator | 2025-05-14 03:29:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:29.214462 | orchestrator | 2025-05-14 03:29:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:32.267381 | orchestrator | 2025-05-14 03:29:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:32.267598 | orchestrator | 2025-05-14 03:29:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:35.321212 | orchestrator | 2025-05-14 03:29:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:35.321332 | orchestrator | 2025-05-14 03:29:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:38.372164 | orchestrator | 2025-05-14 03:29:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:38.372270 | orchestrator | 2025-05-14 03:29:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:41.425555 | orchestrator | 2025-05-14 03:29:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:41.425657 | orchestrator | 2025-05-14 03:29:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:44.473597 | orchestrator | 2025-05-14 03:29:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:44.473701 | orchestrator | 2025-05-14 03:29:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:47.528049 | orchestrator | 2025-05-14 03:29:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:47.528230 | orchestrator | 2025-05-14 03:29:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:50.578849 | orchestrator | 2025-05-14 03:29:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:50.578962 | orchestrator | 2025-05-14 03:29:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:53.626594 | orchestrator | 2025-05-14 03:29:53 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:53.626725 | orchestrator | 2025-05-14 03:29:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:56.671300 | orchestrator | 2025-05-14 03:29:56 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:56.671398 | orchestrator | 2025-05-14 03:29:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:29:59.717657 | orchestrator | 2025-05-14 03:29:59 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:29:59.717784 | orchestrator | 2025-05-14 03:29:59 | 
INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:02.770865 | orchestrator | 2025-05-14 03:30:02 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:02.771008 | orchestrator | 2025-05-14 03:30:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:05.822163 | orchestrator | 2025-05-14 03:30:05 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:05.822278 | orchestrator | 2025-05-14 03:30:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:08.865870 | orchestrator | 2025-05-14 03:30:08 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:08.865958 | orchestrator | 2025-05-14 03:30:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:11.918359 | orchestrator | 2025-05-14 03:30:11 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:11.918469 | orchestrator | 2025-05-14 03:30:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:14.968629 | orchestrator | 2025-05-14 03:30:14 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:14.968749 | orchestrator | 2025-05-14 03:30:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:18.022633 | orchestrator | 2025-05-14 03:30:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:18.022739 | orchestrator | 2025-05-14 03:30:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:21.061717 | orchestrator | 2025-05-14 03:30:21 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:21.061787 | orchestrator | 2025-05-14 03:30:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:24.100918 | orchestrator | 2025-05-14 03:30:24 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:24.100994 | orchestrator | 2025-05-14 03:30:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:27.151926 | orchestrator | 2025-05-14 03:30:27 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:27.152022 | orchestrator | 2025-05-14 03:30:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:30.196143 | orchestrator | 2025-05-14 03:30:30 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:30.196277 | orchestrator | 2025-05-14 03:30:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:33.238317 | orchestrator | 2025-05-14 03:30:33 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:33.238403 | orchestrator | 2025-05-14 03:30:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:36.284392 | orchestrator | 2025-05-14 03:30:36 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:36.284499 | orchestrator | 2025-05-14 03:30:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:39.328348 | orchestrator | 2025-05-14 03:30:39 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:39.328468 | orchestrator | 2025-05-14 03:30:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:42.380125 | orchestrator | 2025-05-14 03:30:42 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:42.380233 | orchestrator | 2025-05-14 03:30:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:45.432481 | 
orchestrator | 2025-05-14 03:30:45 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:45.432557 | orchestrator | 2025-05-14 03:30:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:48.478989 | orchestrator | 2025-05-14 03:30:48 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:48.479156 | orchestrator | 2025-05-14 03:30:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:51.524667 | orchestrator | 2025-05-14 03:30:51 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:51.524769 | orchestrator | 2025-05-14 03:30:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:54.569308 | orchestrator | 2025-05-14 03:30:54 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:54.569426 | orchestrator | 2025-05-14 03:30:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:30:57.618627 | orchestrator | 2025-05-14 03:30:57 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:30:57.618709 | orchestrator | 2025-05-14 03:30:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:00.658980 | orchestrator | 2025-05-14 03:31:00 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:00.659191 | orchestrator | 2025-05-14 03:31:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:03.706652 | orchestrator | 2025-05-14 03:31:03 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:03.706753 | orchestrator | 2025-05-14 03:31:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:06.752617 | orchestrator | 2025-05-14 03:31:06 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:06.752743 | orchestrator | 2025-05-14 03:31:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:09.805986 | orchestrator | 2025-05-14 03:31:09 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:09.806181 | orchestrator | 2025-05-14 03:31:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:12.854462 | orchestrator | 2025-05-14 03:31:12 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:12.854596 | orchestrator | 2025-05-14 03:31:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:15.903251 | orchestrator | 2025-05-14 03:31:15 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:15.903331 | orchestrator | 2025-05-14 03:31:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:18.954917 | orchestrator | 2025-05-14 03:31:18 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:18.955019 | orchestrator | 2025-05-14 03:31:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:22.007597 | orchestrator | 2025-05-14 03:31:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:22.007699 | orchestrator | 2025-05-14 03:31:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:25.052298 | orchestrator | 2025-05-14 03:31:25 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:25.052378 | orchestrator | 2025-05-14 03:31:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:28.110375 | orchestrator | 2025-05-14 03:31:28 | INFO  | Task 
d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:28.110486 | orchestrator | 2025-05-14 03:31:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:31.159013 | orchestrator | 2025-05-14 03:31:31 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:31.159191 | orchestrator | 2025-05-14 03:31:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:34.197617 | orchestrator | 2025-05-14 03:31:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:34.197704 | orchestrator | 2025-05-14 03:31:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:37.243956 | orchestrator | 2025-05-14 03:31:37 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:37.244101 | orchestrator | 2025-05-14 03:31:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:40.301537 | orchestrator | 2025-05-14 03:31:40 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:40.301654 | orchestrator | 2025-05-14 03:31:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:43.346835 | orchestrator | 2025-05-14 03:31:43 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:43.346942 | orchestrator | 2025-05-14 03:31:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:46.393807 | orchestrator | 2025-05-14 03:31:46 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:46.393962 | orchestrator | 2025-05-14 03:31:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:49.445261 | orchestrator | 2025-05-14 03:31:49 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:49.445372 | orchestrator | 2025-05-14 03:31:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:52.493459 | orchestrator | 2025-05-14 03:31:52 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:52.493532 | orchestrator | 2025-05-14 03:31:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:55.528125 | orchestrator | 2025-05-14 03:31:55 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:55.528222 | orchestrator | 2025-05-14 03:31:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:31:58.571988 | orchestrator | 2025-05-14 03:31:58 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:31:58.572137 | orchestrator | 2025-05-14 03:31:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:01.623809 | orchestrator | 2025-05-14 03:32:01 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:01.623907 | orchestrator | 2025-05-14 03:32:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:04.668181 | orchestrator | 2025-05-14 03:32:04 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:04.668261 | orchestrator | 2025-05-14 03:32:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:07.723495 | orchestrator | 2025-05-14 03:32:07 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:07.723608 | orchestrator | 2025-05-14 03:32:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:10.769359 | orchestrator | 2025-05-14 03:32:10 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 
03:32:10.769466 | orchestrator | 2025-05-14 03:32:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:13.818570 | orchestrator | 2025-05-14 03:32:13 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:13.818648 | orchestrator | 2025-05-14 03:32:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:16.867192 | orchestrator | 2025-05-14 03:32:16 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:16.867287 | orchestrator | 2025-05-14 03:32:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:19.917275 | orchestrator | 2025-05-14 03:32:19 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:19.917396 | orchestrator | 2025-05-14 03:32:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:22.961021 | orchestrator | 2025-05-14 03:32:22 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:22.961159 | orchestrator | 2025-05-14 03:32:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:26.017923 | orchestrator | 2025-05-14 03:32:26 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:26.018155 | orchestrator | 2025-05-14 03:32:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:29.068463 | orchestrator | 2025-05-14 03:32:29 | INFO  | Task f6f14918-32ec-47f0-9362-59aed21b9641 is in state STARTED 2025-05-14 03:32:29.069238 | orchestrator | 2025-05-14 03:32:29 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:29.069288 | orchestrator | 2025-05-14 03:32:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:32.123415 | orchestrator | 2025-05-14 03:32:32 | INFO  | Task f6f14918-32ec-47f0-9362-59aed21b9641 is in state STARTED 2025-05-14 03:32:32.124735 | orchestrator | 2025-05-14 03:32:32 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:32.124926 | orchestrator | 2025-05-14 03:32:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:35.172237 | orchestrator | 2025-05-14 03:32:35 | INFO  | Task f6f14918-32ec-47f0-9362-59aed21b9641 is in state STARTED 2025-05-14 03:32:35.173007 | orchestrator | 2025-05-14 03:32:35 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:35.173276 | orchestrator | 2025-05-14 03:32:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:38.232166 | orchestrator | 2025-05-14 03:32:38 | INFO  | Task f6f14918-32ec-47f0-9362-59aed21b9641 is in state STARTED 2025-05-14 03:32:38.233979 | orchestrator | 2025-05-14 03:32:38 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:38.234143 | orchestrator | 2025-05-14 03:32:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:41.275029 | orchestrator | 2025-05-14 03:32:41 | INFO  | Task f6f14918-32ec-47f0-9362-59aed21b9641 is in state SUCCESS 2025-05-14 03:32:41.276136 | orchestrator | 2025-05-14 03:32:41 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:41.276192 | orchestrator | 2025-05-14 03:32:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:44.325254 | orchestrator | 2025-05-14 03:32:44 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED 2025-05-14 03:32:44.325386 | orchestrator | 2025-05-14 03:32:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:32:47.375547 | orchestrator | 
2025-05-14 03:32:47 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 03:32:47.375679 | orchestrator | 2025-05-14 03:32:47 | INFO  | Wait 1 second(s) until the next check
[... the same check/wait pair repeats roughly every three seconds, with the task remaining in state STARTED, until 03:38:34 ...]
2025-05-14 03:38:34.845000 | orchestrator | 2025-05-14 03:38:34 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
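The check/wait loop above (and continuing below until the job deadline) is the OSISM tooling polling a long-running deploy task. Although each message announces a one-second wait, successive checks land roughly three seconds apart, so most of each iteration is apparently spent on the status query itself. A minimal Python sketch of the same poll-until-terminal pattern, assuming a caller-supplied get_task_state() helper and an overall deadline (the names and defaults are illustrative, not the actual osism client API):

```python
import time
from datetime import datetime, timedelta


def wait_for_task(get_task_state, task_id, check_interval=1.0, timeout=7200):
    """Poll a task until it leaves the PENDING/STARTED states or the deadline passes.

    get_task_state is a caller-supplied function returning a state string such as
    "STARTED" or "SUCCESS" (a hypothetical stand-in for the real client call).
    """
    deadline = datetime.now() + timedelta(seconds=timeout)
    while True:
        state = get_task_state(task_id)
        print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO  | Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state
        if datetime.now() >= deadline:
            raise TimeoutError(f"Task {task_id} still in state {state} after {timeout} second(s)")
        print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO  | Wait {check_interval:.0f} second(s) until the next check")
        time.sleep(check_interval)
```

In this build the task never left STARTED, so the Zuul job deadline expired first and the run ends with RESULT_TIMED_OUT below.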
2025-05-14 03:38:34.845116 | orchestrator | 2025-05-14 03:38:34 | INFO  | Wait 1 second(s) until the next check
[... checks continue unchanged until 03:38:50, with the task still in state STARTED ...]
2025-05-14 03:38:50.099440 | orchestrator | 2025-05-14 03:38:50 | INFO  | Task d96aeed1-a30d-4e84-85b3-93c7cfc3e055 is in state STARTED
2025-05-14 03:38:50.099548 | orchestrator | 2025-05-14 03:38:50 | INFO  | Wait 1 second(s) until the next check
2025-05-14 03:38:50.557792 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-14 03:38:50.562190 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-14 03:38:51.325893 |
2025-05-14 03:38:51.326132 | PLAY [Post output play]
2025-05-14 03:38:51.343465 |
2025-05-14 03:38:51.343622 | LOOP [stage-output : Register sources]
2025-05-14 03:38:51.415203 |
2025-05-14 03:38:51.415571 | TASK [stage-output : Check sudo]
2025-05-14 03:38:52.304544 | orchestrator | sudo: a password is required
2025-05-14 03:38:52.460193 | orchestrator | ok: Runtime: 0:00:00.020546
2025-05-14 03:38:52.474116 |
2025-05-14 03:38:52.474312 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-14 03:38:52.510337 |
2025-05-14 03:38:52.510568 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-14 03:38:52.588432 | orchestrator | ok
2025-05-14 03:38:52.597412 |
2025-05-14 03:38:52.597547 | LOOP [stage-output : Ensure target folders exist]
2025-05-14 03:38:53.072005 | orchestrator | ok: "docs"
2025-05-14 03:38:53.072360 |
2025-05-14 03:38:53.333623 | orchestrator | ok: "artifacts"
2025-05-14 03:38:53.615155 | orchestrator | ok: "logs"
2025-05-14 03:38:53.637607 |
2025-05-14 03:38:53.637786 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-14 03:38:53.674747 |
2025-05-14 03:38:53.675038 | TASK [stage-output : Make all log files readable]
2025-05-14 03:38:53.988777 | orchestrator | ok
2025-05-14 03:38:53.999541 |
2025-05-14 03:38:53.999681 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-14 03:38:54.034885 | orchestrator | skipping: Conditional result was False
2025-05-14 03:38:54.048762 |
2025-05-14 03:38:54.048920 | TASK [stage-output : Discover log files for compression]
2025-05-14 03:38:54.075451 | orchestrator | skipping: Conditional result was False
2025-05-14 03:38:54.089874 |
2025-05-14 03:38:54.090025 | LOOP [stage-output : Archive everything from logs]
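The Post output play above runs the stage-output role: it builds a list of source/destination pairs, creates the docs, artifacts and logs staging folders, copies the collected files into them, and makes the log files readable (the rename-to-.txt and compression steps were skipped in this build). A rough Python equivalent of those steps, with made-up paths and a simplified copy standing in for the role's actual implementation:

```python
import shutil
import stat
from pathlib import Path

# Hypothetical staging layout; the real role derives these paths from Zuul variables.
STAGE_DIR = Path.home() / "zuul-output"
SOURCES = {
    "logs": [Path("/var/log/testbed-deploy.log")],   # illustrative source file
    "artifacts": [],
    "docs": [],
}


def stage_output(stage_dir=STAGE_DIR, sources=SOURCES):
    for folder, files in sources.items():
        target = stage_dir / folder
        target.mkdir(parents=True, exist_ok=True)        # "Ensure target folders exist"
        for src in files:
            if src.exists():
                shutil.copy2(src, target / src.name)     # "Copy files and folders to staging folder"

    # "Make all log files readable": add group/other read permission.
    for path in (stage_dir / "logs").rglob("*"):
        if path.is_file():
            path.chmod(path.stat().st_mode | stat.S_IRGRP | stat.S_IROTH)


if __name__ == "__main__":
    stage_output()
```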
2025-05-14 03:38:54.136428 |
2025-05-14 03:38:54.136616 | PLAY [Post cleanup play]
2025-05-14 03:38:54.144777 |
2025-05-14 03:38:54.144888 | TASK [Set cloud fact (Zuul deployment)]
2025-05-14 03:38:54.212754 | orchestrator | ok
2025-05-14 03:38:54.224564 |
2025-05-14 03:38:54.224719 | TASK [Set cloud fact (local deployment)]
2025-05-14 03:38:54.261022 | orchestrator | skipping: Conditional result was False
2025-05-14 03:38:54.278610 |
2025-05-14 03:38:54.278789 | TASK [Clean the cloud environment]
2025-05-14 03:38:55.286299 | orchestrator | 2025-05-14 03:38:55 - clean up servers
2025-05-14 03:38:56.129192 | orchestrator | 2025-05-14 03:38:56 - testbed-manager
2025-05-14 03:38:57.261828 | orchestrator | 2025-05-14 03:38:57 - testbed-node-2
2025-05-14 03:38:57.351724 | orchestrator | 2025-05-14 03:38:57 - testbed-node-5
2025-05-14 03:38:57.442087 | orchestrator | 2025-05-14 03:38:57 - testbed-node-0
2025-05-14 03:38:57.528629 | orchestrator | 2025-05-14 03:38:57 - testbed-node-3
2025-05-14 03:38:57.619478 | orchestrator | 2025-05-14 03:38:57 - testbed-node-1
2025-05-14 03:38:57.709598 | orchestrator | 2025-05-14 03:38:57 - testbed-node-4
2025-05-14 03:38:57.795618 | orchestrator | 2025-05-14 03:38:57 - clean up keypairs
2025-05-14 03:38:57.814437 | orchestrator | 2025-05-14 03:38:57 - testbed
2025-05-14 03:38:57.838282 | orchestrator | 2025-05-14 03:38:57 - wait for servers to be gone
2025-05-14 03:39:11.069748 | orchestrator | 2025-05-14 03:39:11 - clean up ports
2025-05-14 03:39:11.291490 | orchestrator | 2025-05-14 03:39:11 - 2e85b7d6-9d24-444b-be1b-4bbfc53549bb
2025-05-14 03:39:11.521670 | orchestrator | 2025-05-14 03:39:11 - 6836d4d2-c44a-4563-9160-e5ae9c467c9d
2025-05-14 03:39:11.789760 | orchestrator | 2025-05-14 03:39:11 - 7817fa54-20fc-469a-a791-94d2b89eadc3
2025-05-14 03:39:11.971389 | orchestrator | 2025-05-14 03:39:11 - 9ddf52f4-9262-4fc7-b417-b54182336a82
2025-05-14 03:39:12.156070 | orchestrator | 2025-05-14 03:39:12 - c588f8a8-f7d4-4e3d-aad2-5738a1ce04ed
2025-05-14 03:39:12.608322 | orchestrator | 2025-05-14 03:39:12 - e9c6248d-bfd6-4ea3-ba7f-0b22ee2b2865
2025-05-14 03:39:12.786110 | orchestrator | 2025-05-14 03:39:12 - f2a02b92-a4ab-485b-855a-f31ab136e1ae
2025-05-14 03:39:12.975725 | orchestrator | 2025-05-14 03:39:12 - clean up volumes
2025-05-14 03:39:13.107697 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-1-node-base
2025-05-14 03:39:13.151748 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-2-node-base
2025-05-14 03:39:13.193304 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-0-node-base
2025-05-14 03:39:13.233133 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-4-node-base
2025-05-14 03:39:13.277998 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-3-node-base
2025-05-14 03:39:13.320730 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-manager-base
2025-05-14 03:39:13.361439 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-5-node-base
2025-05-14 03:39:13.405178 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-3-node-3
2025-05-14 03:39:13.453174 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-7-node-4
2025-05-14 03:39:13.497444 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-2-node-5
2025-05-14 03:39:13.545598 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-0-node-3
2025-05-14 03:39:13.594466 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-6-node-3
2025-05-14 03:39:13.635079 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-4-node-4
2025-05-14 03:39:13.674558 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-1-node-4
2025-05-14 03:39:13.716925 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-5-node-5
2025-05-14 03:39:13.757166 | orchestrator | 2025-05-14 03:39:13 - testbed-volume-8-node-5
2025-05-14 03:39:13.800132 | orchestrator | 2025-05-14 03:39:13 - disconnect routers
2025-05-14 03:39:13.861541 | orchestrator | 2025-05-14 03:39:13 - testbed
2025-05-14 03:39:14.544843 | orchestrator | 2025-05-14 03:39:14 - clean up subnets
2025-05-14 03:39:14.583415 | orchestrator | 2025-05-14 03:39:14 - subnet-testbed-management
2025-05-14 03:39:14.798159 | orchestrator | 2025-05-14 03:39:14 - clean up networks
2025-05-14 03:39:14.970805 | orchestrator | 2025-05-14 03:39:14 - net-testbed-management
2025-05-14 03:39:15.258168 | orchestrator | 2025-05-14 03:39:15 - clean up security groups
2025-05-14 03:39:15.293129 | orchestrator | 2025-05-14 03:39:15 - testbed-management
2025-05-14 03:39:15.381024 | orchestrator | 2025-05-14 03:39:15 - testbed-node
2025-05-14 03:39:15.475235 | orchestrator | 2025-05-14 03:39:15 - clean up floating ips
2025-05-14 03:39:15.505141 | orchestrator | 2025-05-14 03:39:15 - 81.163.193.246
2025-05-14 03:39:15.898303 | orchestrator | 2025-05-14 03:39:15 - clean up routers
2025-05-14 03:39:15.948168 | orchestrator | 2025-05-14 03:39:15 - testbed
2025-05-14 03:39:16.836813 | orchestrator | ok: Runtime: 0:00:21.967233
2025-05-14 03:39:16.842761 |
2025-05-14 03:39:16.843014 | PLAY RECAP
2025-05-14 03:39:16.843147 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-14 03:39:16.843226 |
2025-05-14 03:39:16.998058 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-14 03:39:17.000705 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-14 03:39:17.739218 |
2025-05-14 03:39:17.739390 | PLAY [Cleanup play]
2025-05-14 03:39:17.755451 |
2025-05-14 03:39:17.755580 | TASK [Set cloud fact (Zuul deployment)]
2025-05-14 03:39:17.810561 | orchestrator | ok
2025-05-14 03:39:17.819208 |
2025-05-14 03:39:17.819352 | TASK [Set cloud fact (local deployment)]
2025-05-14 03:39:17.853345 | orchestrator | skipping: Conditional result was False
2025-05-14 03:39:17.866471 |
2025-05-14 03:39:17.866603 | TASK [Clean the cloud environment]
2025-05-14 03:39:19.070556 | orchestrator | 2025-05-14 03:39:19 - clean up servers
2025-05-14 03:39:19.577784 | orchestrator | 2025-05-14 03:39:19 - clean up keypairs
2025-05-14 03:39:19.597019 | orchestrator | 2025-05-14 03:39:19 - wait for servers to be gone
2025-05-14 03:39:19.681556 | orchestrator | 2025-05-14 03:39:19 - clean up ports
2025-05-14 03:39:19.760844 | orchestrator | 2025-05-14 03:39:19 - clean up volumes
2025-05-14 03:39:19.840680 | orchestrator | 2025-05-14 03:39:19 - disconnect routers
2025-05-14 03:39:19.863165 | orchestrator | 2025-05-14 03:39:19 - clean up subnets
2025-05-14 03:39:19.883105 | orchestrator | 2025-05-14 03:39:19 - clean up networks
2025-05-14 03:39:20.022769 | orchestrator | 2025-05-14 03:39:20 - clean up security groups
2025-05-14 03:39:20.045181 | orchestrator | 2025-05-14 03:39:20 - clean up floating ips
2025-05-14 03:39:20.071220 | orchestrator | 2025-05-14 03:39:20 - clean up routers
2025-05-14 03:39:20.403812 | orchestrator | ok: Runtime: 0:00:01.389154
2025-05-14 03:39:20.407772 |
2025-05-14 03:39:20.407937 | PLAY RECAP
2025-05-14 03:39:20.408059 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-14 03:39:20.408122 |
2025-05-14 03:39:20.535826 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
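Both cleanup runs above tear the testbed down in dependency order: servers and keypairs first, a wait until the servers are gone, then ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router itself. A condensed openstacksdk sketch of that sequence, assuming a clouds.yaml profile named "testbed" and a "testbed" name prefix for matching resources; it illustrates the ordering only and is not the project's actual cleanup code:

```python
import time

import openstack

PREFIX = "testbed"                          # assumption: resources share this name prefix
conn = openstack.connect(cloud="testbed")   # assumption: clouds.yaml profile name

# clean up servers, then keypairs
for server in conn.compute.servers():
    if server.name.startswith(PREFIX):
        conn.compute.delete_server(server)
for keypair in conn.compute.keypairs():
    if keypair.name.startswith(PREFIX):
        conn.compute.delete_keypair(keypair)

# wait for servers to be gone before touching their ports and volumes
while any(s.name.startswith(PREFIX) for s in conn.compute.servers()):
    time.sleep(5)

# clean up ports on the management network, then volumes
net = conn.network.find_network(f"net-{PREFIX}-management")
if net:
    for port in conn.network.ports(network_id=net.id):
        if not (port.device_owner or "").startswith("network:router"):
            conn.network.delete_port(port)
for volume in conn.block_storage.volumes():
    if volume.name.startswith(PREFIX):
        conn.block_storage.delete_volume(volume)

# disconnect the router, then remove subnets, networks, security groups and floating IPs
router = conn.network.find_router(PREFIX)
if router and net:
    for subnet_id in net.subnet_ids:
        conn.network.remove_interface_from_router(router, subnet_id=subnet_id)
for subnet in conn.network.subnets():
    if subnet.name.startswith(f"subnet-{PREFIX}"):
        conn.network.delete_subnet(subnet)
if net:
    conn.network.delete_network(net)
for group in conn.network.security_groups():
    if group.name.startswith(PREFIX):
        conn.network.delete_security_group(group)
for fip in conn.network.ips():
    if not fip.port_id:
        conn.network.delete_ip(fip)

# finally delete the router itself
if router:
    conn.network.delete_router(router)
```

The second pass (from cleanup.yml) finds nothing left to remove, which is why it finishes in about a second.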
2025-05-14 03:39:20.538305 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-14 03:39:21.315269 |
2025-05-14 03:39:21.315465 | PLAY [Base post-fetch]
2025-05-14 03:39:21.331218 |
2025-05-14 03:39:21.331434 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-14 03:39:21.387805 | orchestrator | skipping: Conditional result was False
2025-05-14 03:39:21.405641 |
2025-05-14 03:39:21.405953 | TASK [fetch-output : Set log path for single node]
2025-05-14 03:39:21.466348 | orchestrator | ok
2025-05-14 03:39:21.475956 |
2025-05-14 03:39:21.476119 | LOOP [fetch-output : Ensure local output dirs]
2025-05-14 03:39:21.972768 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/be0688f9253a41979af5a828f2206be5/work/logs"
2025-05-14 03:39:22.255369 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/be0688f9253a41979af5a828f2206be5/work/artifacts"
2025-05-14 03:39:22.548564 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/be0688f9253a41979af5a828f2206be5/work/docs"
2025-05-14 03:39:22.574008 |
2025-05-14 03:39:22.574181 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-14 03:39:23.556466 | orchestrator | changed: .d..t...... ./
2025-05-14 03:39:23.556936 | orchestrator | changed: All items complete
2025-05-14 03:39:23.557012 |
2025-05-14 03:39:24.347823 | orchestrator | changed: .d..t...... ./
2025-05-14 03:39:25.122962 | orchestrator | changed: .d..t...... ./
2025-05-14 03:39:25.150176 |
2025-05-14 03:39:25.150344 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-14 03:39:25.176182 | orchestrator | skipping: Conditional result was False
2025-05-14 03:39:25.180528 | orchestrator | skipping: Conditional result was False
2025-05-14 03:39:25.202062 |
2025-05-14 03:39:25.202188 | PLAY RECAP
2025-05-14 03:39:25.202271 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-14 03:39:25.202344 |
2025-05-14 03:39:25.328942 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-14 03:39:25.331704 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-14 03:39:26.062111 |
2025-05-14 03:39:26.062272 | PLAY [Base post]
2025-05-14 03:39:26.077084 |
2025-05-14 03:39:26.077220 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-14 03:39:27.148992 | orchestrator | changed
2025-05-14 03:39:27.158324 |
2025-05-14 03:39:27.158466 | PLAY RECAP
2025-05-14 03:39:27.158544 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-14 03:39:27.158623 |
2025-05-14 03:39:27.276253 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-14 03:39:27.277255 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-14 03:39:28.070929 |
2025-05-14 03:39:28.071105 | PLAY [Base post-logs]
2025-05-14 03:39:28.081886 |
2025-05-14 03:39:28.082029 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-14 03:39:28.548384 | localhost | changed
2025-05-14 03:39:28.572724 |
2025-05-14 03:39:28.572974 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-14 03:39:28.612947 | localhost | ok
2025-05-14 03:39:28.619963 |
2025-05-14 03:39:28.620139 | TASK [Set zuul-log-path fact]
2025-05-14 03:39:28.647636 | localhost | ok
2025-05-14 03:39:28.661658 |
2025-05-14 03:39:28.661816 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-14 03:39:28.688390 | localhost | ok
2025-05-14 03:39:28.693025 |
2025-05-14 03:39:28.693173 | TASK [upload-logs : Create log directories]
2025-05-14 03:39:29.211081 | localhost | changed
2025-05-14 03:39:29.214027 |
2025-05-14 03:39:29.214132 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-14 03:39:29.696056 | localhost -> localhost | ok: Runtime: 0:00:00.007416
2025-05-14 03:39:29.700371 |
2025-05-14 03:39:29.700486 | TASK [upload-logs : Upload logs to log server]
2025-05-14 03:39:30.306088 | localhost | Output suppressed because no_log was given
2025-05-14 03:39:30.309262 |
2025-05-14 03:39:30.309440 | LOOP [upload-logs : Compress console log and json output]
2025-05-14 03:39:30.368428 | localhost | skipping: Conditional result was False
2025-05-14 03:39:30.373846 | localhost | skipping: Conditional result was False
2025-05-14 03:39:30.380097 |
2025-05-14 03:39:30.380275 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-14 03:39:30.433086 | localhost | skipping: Conditional result was False
2025-05-14 03:39:30.433803 |
2025-05-14 03:39:30.436808 | localhost | skipping: Conditional result was False
2025-05-14 03:39:30.444749 |
2025-05-14 03:39:30.444983 | LOOP [upload-logs : Upload console log and json output]
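The fetch-output and upload-logs steps above create logs, artifacts and docs directories under the build's work dir on the executor, rsync the staged output from the node into them, and then push everything to the log server (that upload is suppressed by no_log). A small sketch of the collection step, with a hypothetical remote host, staging path and work directory standing in for the values Zuul derives from the build:

```python
import subprocess
from pathlib import Path

# Hypothetical values; Zuul derives these from the node inventory and build UUID.
REMOTE = "zuul-worker@testbed-orchestrator"
REMOTE_STAGE = "~/zuul-output"
LOCAL_WORK = Path("/tmp/zuul-build/work")


def fetch_output(kinds=("logs", "artifacts", "docs")):
    """Mirror the staged output directories from the node to the executor."""
    for kind in kinds:
        dest = LOCAL_WORK / kind
        dest.mkdir(parents=True, exist_ok=True)   # "Ensure local output dirs"
        subprocess.run(                           # "Collect logs, artifacts and docs"
            ["rsync", "-a", f"{REMOTE}:{REMOTE_STAGE}/{kind}/", f"{dest}/"],
            check=True,
        )


if __name__ == "__main__":
    fetch_output()
```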